AUTONOMOUS 18-WHEELERS ARE now driving the highways. Coffee table gadgets are recognizing spoken English nearly as well as humans. Smartphone apps instantly translate conversations between people speaking as many as nine different languages. But for Dean Pomerleau, none of this is all that surprising.
Pomerleau built a self-driving car way back in 1989, when the first George Bush was president, and it navigated private roads using a neural network, the same AI technology that underpins modern gadgetry like the Amazon Echo and Microsoft Translator. This car wasn’t ready for the public highways, thanks to the limited computing power of the 20th century. But as a graduate student at Carnegie Mellon, Pomerleau was one of the few who understood the promise of this AI well before the world had the vast amounts of computing power and data needed to push it into everyday life. Now, he’s working to solve a much harder AI problem: fake news.
A quarter-century after his self-driving car appeared in Byte magazine, Pomerleau is an adjunct professor at Carnegie Mellon, and last month, as so many lamented the role of fake news in the presidential election, he put a call out on Twitter, challenging the AI community to build an algorithm that could identify fake news and remove it from online services like Twitter, Google, and Facebook. It was an open-ended bet, with Pomerleau putting down $1,000. And the community took him up on it.
His bet now has a website, a Slack channel, a GitHub code repository, and a Twitter hashtag—#FakeNewsChallenge—and over the past several weeks, nearly 40 researchers, academics, engineers, and independent hackers have joined the grassroots project. Delip Rao, a machine learning expert who helped build the speech recognition system that underpins the Amazon Echo, recently put down another $1,000 in prize money. Pomerleau, Rao, and the rest of these hackers will now compete in teams, working over the next six months to identify fake news using neural networks and a range of other AI techniques.
And they will fail.
Neural networks can recognize cats in YouTube videos, spot computer viruses, and even help a car drive down the road on its own. But they can’t identify fake news—at least not with real certainty. Part of the problem is that the characteristics of fake news stories are enormously hard to pin down. Recognizing what’s fake requires not just the kind of pattern recognition that AI is so good at. It requires human judgment, as Pomerleau himself acknowledges. A machine that can reliably identify fake news is a machine that has completely solved AI. “It would mean AI has reached human-level intelligence,” he says. What’s more, even humans can’t agree on what’s fake and what’s not. The news is always a tension between objective observation and subjective judgment. “In many cases, there is no right answer,” Pomerleau admits.
Pomerleau’s hope, rather, is that he and other researchers can build algorithms that mitigate the fake news problem—algorithms that can flag potentially fake news for humans to review. It’s another case of AI not exactly replacing humans but working alongside them, helping us perform tasks with greater speed and accuracy. If paired with human editors, the kinds of algorithms produced by Pomerleau’s challenge could indeed allow the likes of Google, Facebook, and Twitter to catch particularly egregious stories far more quickly than before.
These companies are likely working on their own algorithms, and no doubt, they too see this AI as something that will operate alongside humans. Earlier this month, Yann LeCun, the head of AI research at Facebook, told a group of reporters that technology could solve the fake news problem. But like Pomerleau, he stopped short of saying it could solve the problem on its own. “The question is how does it make sense to deploy it?” he said. “And this isn’t my department.”
LeCun’s boss, Facebook CEO Mark Zuckerberg, knows that human eyes are also required. After all, this is how the company works to remove lewd photos and hate speech from its vast social network. Facebook has done both with considerable success through a combination of humanity and technology—sometimes more humanity than technology. And during a pre-election interview at Facebook headquarters, Zuckerberg told me this is also how the company will use new algorithms designed to predict when Facebook users are at risk of suicide. The technology will alert trained human professionals who can then evaluate the situation in full. “The sum of these two things is much more powerful than either of them by themselves,” he said.
Fake news is no different. “We need humans in the loop,” says Rao, the ex-Amazon Echo engineer who has joined the Fake News Challenge. “Expert judgment is indispensable.”
The Virtuous Circle
What AI experts like Rao can do is put a bigger dent in the problem. This starts with an online database filled with bogus stories.
Seven years ago, researchers at Stanford University started building a massive database of digital photos called ImageNet, hoping to facilitate the development of computer vision. Neural networks, you see, learn tasks by analyzing vast amounts of carefully labeled data. ImageNet was designed to feed these algorithms—and it worked. The world now has online services like Google Photos, which can instantly recognize objects and faces in digital pics. Rao and Pomerleau aim to build a similar database of fake news.
“This alone is a hard problem,” says Rao, who runs a machine learning consultancy called Joostware. “We have to spend a lot of time just defining what fake news is.” They must separate parody sites and honest mistakes from blatantly fake news meant to deceive, while also deciding how to treat news that is exaggerated or twisted in some way.
The hope is that this database can help train all sorts of fake news algorithms—and that these algorithms can find a home at Snopes.com, PolitiFact, or FactCheck.org, websites where humans are already working to separate the real from the fake. As with so many other AI projects, this could eventually create a virtuous circle of humanity, data, and AI: as algorithms and human fact checkers identify more and more fake news, this ever-expanding collection of data helps create still better algorithms.
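The mechanics of that circle can be sketched in a few lines of code. What follows is a minimal, hypothetical illustration, not anyone's actual system: a naive Bayes classifier trained on a handful of invented, labeled headlines, used to flag (not delete) suspect stories for human review. Every headline, label, function name, and threshold here is made up for the example.

```python
# Hypothetical sketch: train a tiny naive Bayes text classifier on
# invented labeled headlines, then flag suspect ones for human review.
# A real system would train on a large curated database like the one
# Rao and Pomerleau describe; nothing here is real data.
import math
from collections import Counter

# Invented labeled examples: (headline, label)
TRAINING_DATA = [
    ("shocking secret cure doctors hate", "fake"),
    ("you won't believe this miracle trick", "fake"),
    ("celebrity secretly arrested shocking photos", "fake"),
    ("senate passes budget bill after debate", "real"),
    ("scientists publish study on climate data", "real"),
    ("local election results certified by officials", "real"),
]

def train(examples):
    """Count word frequencies per label (a bag-of-words model)."""
    word_counts = {"fake": Counter(), "real": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def score_fake(headline, word_counts, label_counts):
    """Log-odds that a headline is fake, with Laplace smoothing."""
    vocab = set(word_counts["fake"]) | set(word_counts["real"])
    fake_total = sum(word_counts["fake"].values())
    real_total = sum(word_counts["real"].values())
    log_odds = math.log(label_counts["fake"] / label_counts["real"])
    for word in headline.lower().split():
        p_fake = (word_counts["fake"][word] + 1) / (fake_total + len(vocab))
        p_real = (word_counts["real"][word] + 1) / (real_total + len(vocab))
        log_odds += math.log(p_fake / p_real)
    return log_odds

def flag_for_review(headline, word_counts, label_counts, threshold=0.0):
    """Flag a headline for a human fact checker; never auto-delete."""
    return score_fake(headline, word_counts, label_counts) > threshold

word_counts, label_counts = train(TRAINING_DATA)
print(flag_for_review("shocking miracle cure you won't believe",
                      word_counts, label_counts))   # True: flagged
print(flag_for_review("senate debate on budget continues",
                      word_counts, label_counts))   # False: passes
```

The point of the sketch is the loop, not the model: headlines the classifier flags go to human fact checkers, and their verdicts become new labeled examples that grow the training set.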
Ultimately, this circle could include services like Facebook and Twitter. Just yesterday, Facebook announced that it will work with sites like Snopes and FactCheck.org to identify fake news on its own network. If people like Pomerleau add reliable AI algorithms to the mix, Facebook could potentially catch egregious stories with greater speed—perhaps even before they go viral. “Our efforts just became a whole lot more relevant,” Pomerleau said over Slack as Facebook unveiled its new policies.
But so many hurdles loom.
This week, two emails turned up in my inbox, both related to fake news. One pointed me to Pomerleau and his AI contest. The other carried the subject line “More on Fake News Reports,” and when opened, it listed what it described as eight leading sources of fake news. Among them were CNN, the Associated Press, The New York Times, and Hillary Clinton. And then, as a kicker, this email suggested I watch Fox News instead. Pomerleau and his hackers have no intention of shuttling The New York Times into their database. And this country of ours includes so many people who would gladly tag Fox News as fake. Which is only to say that no database will please everyone.
Pomerleau’s Fake News Challenge is more relevant than ever—but also more challenging. Even as he was hailing Facebook’s most recent moves to quash fake news, his hashtag—#FakeNewsChallenge—was being hijacked by conspiracy theorists calling for a boycott on CNN over purported lies about Donald Trump, questioning whether Barack Obama was born in the US, and generally spewing hate speech at ethnic minorities. These tweets piled up at a rate of about 25 a minute. “Do we really think a method of flagging fake news on social media stands a chance against that onslaught?” Pomerleau said. “I’m at a loss.”