Researchers have developed an AI system meant to spot and flag fake news. The model scours a public dataset of phony news stories, alerts users, and redirects them to verified information sources. It's one of a growing number of AI methods for countering false news.

"The amount of information flowing through the internet, especially social networks, is massive and cannot be handled manually, especially with high accuracy," Wael AbdAlmageed, a computer engineering professor at the University of Southern California who has developed AI algorithms for detecting visual misinformation, told Lifewire in an email interview.

"It is important to monitor and flag misinformation in real-time, since once misinformation starts propagating, it is hard to convince people that the information is false, especially when misinformation confirms our biases," he added.
Keeping It Real
The AI technique, developed by a team at Australia's Macquarie University, could help reduce the spread of fake news. The model can be incorporated into an app or web software and offers links to relevant 'true' information that aligns with each user's interests.

"When you read or watch news online, often news stories about similar events or topics are suggested for you using a recommendation model," Shoujin Wang, a data scientist at Macquarie University who worked on the research, said in the news release.

Wang says that accurate news and fake news about the same event often use different content styles, which confuses computer models into treating them as news about different events. Macquarie University's model 'disentangles' the information in each news item into two parts: the signals showing whether the item is fake, and the event-specific information showing the topic or event the story is about. The model then looks for patterns in how users move between news pieces to predict which news event a user may be interested in reading about next.

The research team trained the model on FakeNewsNet, a public dataset of fake news published on GitHub that stores fake news from PolitiFact and GossipCop along with data such as news content, social context, and users' reading histories.
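The disentanglement idea can be illustrated with a toy sketch: the fake/real decision depends only on the veracity part of each item, while the next-event prediction depends only on the event part and the user's reading history. Everything below (the class names, the 0.5 threshold, the simple transition-counting rule) is a hypothetical illustration, not the Macquarie team's actual model.

```python
# Toy sketch of "disentangled" news representations (illustrative only).
from collections import Counter
from dataclasses import dataclass

@dataclass
class NewsItem:
    event: str             # event-specific part: the topic the story is about
    veracity_score: float  # style-based signal; > 0.5 treated as likely fake

def flag_if_fake(item: NewsItem) -> bool:
    """The fake-news flag uses only the veracity part, not the event."""
    return item.veracity_score > 0.5

def predict_next_event(history: list[str]) -> str:
    """Predict the event a user may read next from transition patterns:
    here, simply the event most often read right after the most recent one."""
    last = history[-1]
    followers = Counter(history[i + 1] for i in range(len(history) - 1)
                        if history[i] == last)
    return followers.most_common(1)[0][0] if followers else last

reading_history = ["election", "sports", "election", "vaccine", "election", "vaccine"]
item = NewsItem(event="vaccine", veracity_score=0.8)
print(flag_if_fake(item))                 # True: flagged as likely fake
print(predict_next_event(reading_history))
```

Separating the two parts means a flashy writing style on a real story does not drag the recommender toward the wrong event, and vice versa.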
The Growth of Fake News
Fake news is a growing problem, studies suggest. NewsGuard has found that a significant share of social media engagement with news comes from unreliable websites: in 2020, 17 percent of engagement among the top 100 news sources came from Red-rated (generally unreliable) sites, up from approximately 8 percent in 2019.

Subramaniam Vincent, the director of Journalism and Media Ethics at the Markkula Center for Applied Ethics at Santa Clara University, told Lifewire in an email interview that AI can help counter disinformation.

The technology can be used for "monitoring account behavior for orchestrated sharing correlated with hate speech or already debunked claims or being debunked by fact-checkers or known propagandist state entities or nascent groups with rapid membership rise," Vincent explained. "AI can also be used along with design to flag content of particular types to add friction before they are shared."

AbdAlmageed said that social networks need to integrate fake news detection algorithms into their recommendation algorithms. The goal, he said, is to "flag fake news as fake or not accurate if they do not want to completely prevent sharing fake news."

That said, while AI might be useful for countering fake news, the approach has its downsides, Vincent said. Because AI systems cannot understand the meaning of human speech and writing, they will always be behind the curve. "The more accurate AI might get with some forms of overt hate speech and disinformation, the more human culture will move to newer code and subterranean meaning transmission to organize," Vincent said.

Wasim Khaled, CEO of the disinformation monitoring company Blackbird.AI, said in an email to Lifewire that online disinformation is an evolving threat, and new AI systems need to be able to predict where fake news will pop up next. "In most cases, you cannot build an AI product and call it done," Khaled said.
“Behavioral patterns change over time, and it’s important that your AI models keep up with these changes.”
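Vincent's idea of using design to "add friction" before sharing can be sketched in a minimal form: check a post against a list of already-debunked claims and, on a match, have the interface show a warning and ask the user to confirm before sharing. The function name and the claim list below are hypothetical, purely for illustration.

```python
# Illustrative friction check (names and claim list are hypothetical).
DEBUNKED_CLAIMS = {"miracle cure", "rigged ballots"}  # stand-in fingerprints

def needs_friction(post_text: str) -> bool:
    """Return True if the post matches a debunked claim, so the UI can
    show a warning and require confirmation before the share goes through."""
    text = post_text.lower()
    return any(claim in text for claim in DEBUNKED_CLAIMS)

print(needs_friction("New miracle cure found!"))   # True  -> show warning
print(needs_friction("Weather looks fine today"))  # False -> share normally
```

A real system would match claims far more robustly (paraphrase, images, links), but the design point is the same: the check runs before sharing, not after the post has spread.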