Using Machine Learning to Stop Fake News
Given all the brilliant things that are happening today with machine learning and artificial intelligence, I just don’t understand why “fake news” is still an issue. I think the solution is right in front of us; that is, if social media networks are really serious about addressing this problem.
Facebook is one of the biggest culprits in tolerating fake news, and that probably has a lot to do with the “economics of social engagement.” An article titled “Future of Social Media” summarizes the challenge nicely:
“While it’s great that everyone and her brother has access to create content online, offering a more diverse and thriving online market, this also generates stronger competition for your content to break through the clutter and be seen.
In fact, there will be a time in which the amount of content internet users can consume will be outweighed by the amount of content produced. Schaefer calls this “Content Shock” which, unfortunately, is uneconomical.”
Figure 1 shows the area of “Content Shock,” when the ability to create content outstrips the ability for humans to consume it.
The article recommends creating "content that will stand out" in order to draw attention and create engagement. Well, nothing draws attention and creates engagement like "fake news." For example, here are some fake news articles and the number of Facebook engagements each of them drove:
- “Pope Francis shocks world, endorses Donald Trump for president” – 960,000 Facebook engagements
- “WikiLeaks confirms Hillary sold weapons to ISIS … Then drops another bombshell” – 789,000 Facebook engagements
- “FBI agent suspected in Hillary email leaks found dead in apartment murder-suicide” – 567,000 Facebook engagements
That’s an awful lot of Facebook engagements with news that isn’t true, but the “news” certainly does “stand out” in the crowded content space and it certainly does drive engagement.
Solving the Fake News Problem
So assuming that the social media networks truly are motivated to solve the “fake news” problem, here is how I would do it.
- Step 1: Leverage crowdsourcing to flag potential fake news articles. Social media networks could create a “Fake News” button that flags potential fake news, like Yahoo Mail does today to flag potential spam (see Figure 2).
- Step 2: Human Reviewers would need to review the flagged "Fake News" articles to determine which ones are fake and which are not. The reviewers could even add metadata capturing information such as "degree of fakeness" (i.e., is it an outright lie or just a slight twisting of the facts?) and "severity of fakeness" (i.e., fake news about a celebrity isn't nearly as severe as fake news about a political candidate. Heck, there are certain celebrities whose fame seems to be based entirely upon fake news… the Kardashians?).
- Step 3: Apply Supervised Machine Learning algorithms to the flagged potential "fake news" articles to find and quantify correlations and predictors (i.e., combinations of words, phrases and topics) of "fake news" outcomes. Then use the resulting "fake news" models to score new articles' "level of fakeness." Remember, Supervised Machine Learning algorithms identify and quantify relationships between potential predictive variables and known outcomes (e.g., spam, fraudulent transaction, machine failure, web click, purchase transaction) gathered from historical (training) data sets, and then apply the resulting models to new data sets.
- Step 4: Create "Reader Credibility Scores" to rank the credibility of people flagging fake news articles. It is critical to create reader credibility scores (think FICO score or Uber driver and passenger scores) to measure the integrity of folks who are flagging potential fake news (as well as those who are promoting it). That will help to identify "trolls" who are just trying to perpetuate the fake stories or cast doubt on real news.
Amazon already supports the flagging of potential “Trolls” and “fake reviews” in their customer reviews (see Figure 3).
- Step 5: Create "Publisher Credibility Scores" that measure the credibility and reliability of each publisher or source of the article. This score would combine the results of the fakeness analysis (how many fake articles the publisher is responsible for) with other variables such as the publisher's number of employees and tenure in the business (e.g., the Wall Street Journal has around 3,600 employees and has been publishing since 1851, versus Liberty Writers News, which has 2 employees and has been publishing only since 2015). Heck, there is even a Wikipedia page, "List of fake news websites," that lists known fake news sites, such as Liberty Writers News, American News, Disclose TV, Drudgereport.com and World Truth TV.
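Step 3 above is the classic supervised-learning pattern. Here is a minimal sketch of what training on the reviewer-labeled corpus and scoring a new article could look like. The training articles and labels below are hypothetical placeholders (a real system would use the crowd-flagged, reviewer-confirmed data), and the tiny Naive Bayes model keeps the sketch dependency-free; a production system would use a proper ML library.

```python
# Sketch of Step 3: learn word-level predictors of "fake news" from
# reviewer-labeled articles, then score a new article's "level of fakeness."
# Training data below is hypothetical illustration data.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(texts, labels):
    """Count word frequencies per class (1 = fake, 0 = not fake)."""
    counts = {0: Counter(), 1: Counter()}
    class_totals = Counter(labels)
    for text, label in zip(texts, labels):
        counts[label].update(tokenize(text))
    return counts, class_totals

def fakeness_score(text, counts, class_totals):
    """Naive Bayes posterior P(fake | words), with Laplace smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    log_odds = math.log(class_totals[1] / class_totals[0])  # prior odds
    for word in tokenize(text):
        p_fake = (counts[1][word] + 1) / (sum(counts[1].values()) + len(vocab))
        p_real = (counts[0][word] + 1) / (sum(counts[0].values()) + len(vocab))
        log_odds += math.log(p_fake / p_real)
    return 1 / (1 + math.exp(-log_odds))  # squash to a 0-1 "fakeness" score

# Historical (training) data: article text plus reviewer-confirmed outcome.
train_texts = [
    "pope shocks world endorses candidate",
    "senate passes budget bill after debate",
    "agent found dead in shocking cover-up",
    "council approves new school funding",
]
train_labels = [1, 0, 1, 0]  # 1 = fake, 0 = not fake

counts, class_totals = train(train_texts, train_labels)
score = fakeness_score("shocking cover-up endorses candidate", counts, class_totals)
print(round(score, 2))  # closer to 1.0 = more likely fake
```

The same scored output could then feed the Reader and Publisher Credibility Scores in Steps 4 and 5: a reader's flags that agree with the model and the human reviewers raise that reader's score, and a publisher's running average of fakeness scores feeds its publisher score.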
Freedom of Speech and Type I/Type II Errors
Machine Learning could certainly help to mitigate and flag fake news, but it probably cannot, and should not even try to, eliminate it entirely. Why? Freedom of Speech, guaranteed by the First Amendment of the U.S. Constitution.
One important consideration as social media organizations look to squelch fake news is not to violate Freedom of Speech. So instead of outright deletion of questionable publications (other than for pornographic, libelous or hate-speech reasons), it might be better for the social media sites to use some sort of "Degrees of Truth" indicator that could accompany each publication or article. These indicators might look something like Figure 4.
The cost to society of blocking potentially valid news (a false positive) greatly outweighs the cost of letting a few fake news articles get published (false negatives). So one will need to err on the side of allowing some level of fake news to ensure that one is not blocking real (though perhaps controversial) news. See my blog "Understanding Type I and Type II Errors" to learn more about the potential costs and liabilities associated with Type I and Type II errors.
Machine Learning to End Fake News
Ending Fake News seems like the perfect application of machine learning. Organizations like Yahoo, Google and Microsoft have been using machine learning for years now to catch spam (see the article "Google Says Its AI Catches 99.9 Percent Of Gmail Spam"), and companies like McAfee and Symantec employ machine learning to catch viruses (see the article "Malware Detection with Machine Learning Methods").
Fake news looks a lot like spam and a virus to me. Should be an easy problem to solve, if one really wants to.
Note: A troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages with the intent of provoking readers into an emotional response or of otherwise disrupting normal, on-topic discussion. (Source: https://en.wikipedia.org/wiki/Internet_troll)