CommentIQ, a new tool developed at the University of Maryland, combines interactive visuals and machine learning to automatically rate reader-submitted comments on editorial metrics such as readability, relevance, sentiment, profanity, and authority. CommentIQ has been deployed at the New York Times, the Wall Street Journal, the Washington Post, and the Baltimore Sun.
by Deok Gun Park and Niklas Elmqvist
In a perfect world, comment sections on the internet would be intellectual forums where everyone could exchange thoughtful, polite, and constructive ideas about an online article, video, or social media post. In the real world, however, this is typically far from the case: the comment sections on many websites are notorious for being filled with nonsense, repetition, insults, and even profanity, misogyny, and racism. Left unchecked, this behavior often leads to toxic communities that contribute little to the hosting website. As a result, many online communities are choosing to disable public commenting altogether.
Maintaining high-quality comments typically requires employing people who moderate submitted comments to weed out toxic, vulgar, or irrelevant messages. Finding good comments, however, amounts to searching for the proverbial needle in a haystack, so this practice is costly and time-consuming, particularly for high-traffic websites. For example, the New York Times employs a dozen so-called “community managers” whose job is to read every comment submitted by readers for a particular article and either discard it, accept it for posting, or highlight it as an “NYT Pick”: a particularly insightful, humorous, or valuable comment. As a result, the New York Times comment section is generally recognized as a community of insightful, intelligent, and entertaining commenters who genuinely add value to each news article published. However, few websites have the New York Times' resources for hiring enough moderators to manage all submitted comments, and so they tend to rely mostly on readers themselves reporting questionable comments. Unfortunately, this places an undue burden on readers and often lowers the quality of the general discussion.
Our team at the University of Maryland’s Human-Computer Interaction Laboratory (HCIL), one of the oldest HCI research labs in the country, recently designed a new online tool called CommentIQ to help newspaper moderators find these needles in the haystacks of reader-submitted comments. The system allows moderators to read, rate, and curate large quantities of comments more efficiently. The tool is built around a core of six comment-specific and six user-specific metrics that are used to classify each comment submitted for a particular online article. All of these metrics were designed in close collaboration with editors experienced in curating reader-submitted content, and they capture the practiced “eye” that these editors turn on such comments. Comment-specific metrics include the length, relevance, and readability of the comment, whereas user-specific metrics include the commenter's historical rating, personal experience, and past behavior. These metrics are then used to rate a collection of comments for an article through a web interface that combines interactive visualizations with a set of streamlined controls for ranking the comments. This allows a moderator to quickly identify a varied set of comments based on different metrics.
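To make the idea of comment-specific metrics concrete, here is a minimal sketch of how length, readability, and relevance scores might be computed for a single comment. This is not CommentIQ's actual implementation; the function names are hypothetical, and the specific choices here (the Flesch Reading Ease formula for readability, simple word overlap with the article text for relevance) are illustrative stand-ins for the production classifiers.

```python
import re

def _syllables(word):
    # Crude syllable estimate: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Flesch Reading Ease score: higher values mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def relevance(comment, article):
    """Fraction of distinct comment words that also appear in the article."""
    comment_words = set(re.findall(r"[a-z']+", comment.lower()))
    article_words = set(re.findall(r"[a-z']+", article.lower()))
    if not comment_words:
        return 0.0
    return len(comment_words & article_words) / len(comment_words)

def length_score(comment):
    """Comment length measured in words."""
    return len(re.findall(r"[A-Za-z']+", comment))
```

A moderator-facing system would combine scores like these (along with user-specific metrics) into the rankings shown in the interface; the point here is only that each metric reduces a comment to a number that can be sorted and filtered.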
We have deployed the CommentIQ system in newsrooms at the New York Times, the Washington Post, the Wall Street Journal, and the Baltimore Sun. Working with our partners at these sites from the inception of the project, our design process has been driven by the unique needs and problems that editors and moderators encounter in their work. Initial feedback on the finalized CommentIQ system has been very positive, and we look forward to deploying it in production in the future.
This effort is part of a Knight Foundation-funded prototype grant called CommentIQ (http://comment-iq.com/). The CommentIQ team is led by Dr. Nicholas Diakopoulos, an assistant professor in the Philip Merrill College of Journalism at the University of Maryland (UMD). The main developer of the CommentIQ system is Deok Gun Park, a Ph.D. student in Computer Science at UMD. Simranjit Singh, a master's student at the UMD College of Information Studies, built the majority of the server-side text classification metrics. Finally, Dr. Niklas Elmqvist, an associate professor of information studies at the UMD iSchool, contributed his expertise in visual analytics to the project. The project has allowed us to begin working with a larger effort called the Coral Project (https://coralproject.net/), which is working to reinvent commenting for news websites. We have also begun collaborating with the Washington Post on additional questions that emerged from working on this project.