Why humans and AI are stuck in a standoff against fake news
Fake news is a scourge on the global community. Despite our best efforts to combat it, the problem runs deeper than fact-checking or squelching publications that specialize in misinformation. Current thinking still tends to favor an AI-powered solution, but what does that really mean?
According to recent research, including this paper from scientists at the University of Tennessee and the Rensselaer Polytechnic Institute, we’re going to need more than just clever algorithms to fix our broken discourse.
The problem is simple: AI can’t do anything a person can’t do. Sure, it can do plenty of things faster and more efficiently than people – like counting to a million – but, at its core, artificial intelligence only scales things people can already do. And people really suck at identifying fake news.
According to the aforementioned researchers, the problem lies in what’s called “confirmation bias.” Basically, when a person thinks they already know something, they’re less likely to be swayed by a “fake news” tag or a “dubious source” description.
Per the team’s paper, this makes it incredibly difficult to design, develop, and train an AI system to spot fake news.
While most of us may think we can spot fake news when we see it, the truth is that the bad actors creating misinformation aren’t doing so in a vacuum: they’re better at lying than we are at telling the truth. At least when they’re saying something we already believe.
The scientists found people – including independent Amazon Mechanical Turk workers – were more likely to incorrectly view an article as fake if it contained information contrary to what they believed to be true.
On the flip side, people were less likely to make the same mistake when the news presented covered a novel situation – one they had no strong preconceptions about. In other words: when we think we know what’s going on, we’re more likely to accept fake news that lines up with our preconceived notions.
While the researchers go on to identify several ways this insight could be used to better warn people when they’re presented with fake news, the gist is that accuracy isn’t the issue. Even when the AI gets it right, we’re still less likely to believe a real news article when the facts don’t line up with our personal biases.
This isn’t surprising. Why should someone trust a machine built by big tech in place of the word of a human journalist? If you’re thinking: because machines don’t lie, you’re absolutely wrong.
When an AI system is built to identify fake news, it typically has to be trained on pre-existing data. To teach a machine to recognize and flag fake news in the wild, we have to feed it a mixture of real and fake articles so it can learn to tell which is which. And the datasets used to train AI are usually labeled by hand, by humans.
Often this means crowdsourcing labeling duties to a third-party cheap-labor outfit such as Amazon’s Mechanical Turk, or to any number of data shops that specialize in datasets, not news. The humans deciding whether a given article is fake may or may not have any actual experience or expertise with journalism and the tricks bad actors use to create compelling, hard-to-detect fake news.
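To make that pipeline concrete, here’s a minimal sketch of the kind of supervised training described above, written in Python with scikit-learn. The dataset file and column names are hypothetical placeholders, not a reference to any specific corpus:

```python
# Minimal sketch: a text classifier that learns from articles humans have
# already labeled "real" or "fake". File and column names are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Each row pairs an article's text with a human-assigned label (1 = fake, 0 = real).
df = pd.read_csv("labeled_articles.csv")  # hypothetical, human-labeled dataset

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# Bag-of-words (TF-IDF) features feeding a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# The model can only echo the judgments of its labelers: whatever bias the
# human annotators brought to the labels is baked into its predictions.
print(classification_report(y_test, model.predict(X_test)))
```

Note the dependency at the heart of this sketch: the “ground truth” is nothing more than the labelers’ opinions, which is exactly where confirmation bias creeps in.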
And, as long as humans are biased, we’ll continue to see fake news thrive. Not only does confirmation bias make it difficult for us to tell facts we don’t agree with from lies we do agree with, but the spread and acceptance of outright lies and misinformation by celebrities, family members, peers, bosses, and the highest political offices makes it even harder to change people’s minds.
While AI systems can certainly help identify egregiously false claims, especially those made by outlets that regularly engage in fake news, the fact remains that whether or not a news article is true isn’t really the issue for most people.
Take, for instance, the most-watched cable network on television: Fox News. Its own lawyers have repeatedly argued that numerous programs – including the second-most-watched program on the network, hosted by Tucker Carlson – are, in effect, fake news.
In a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil – a Trump appointee – ruled in favor of Carlson and Fox after finding that reasonable viewers wouldn’t take the host’s everyday rhetoric as statements of fact.
And that’s why, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether any given news statement is true or false.
If the news outlets themselves, the general public, elected officials, big tech, and the so-called experts can’t decide whether a given news article is true or false without bias, there’s no way we can trust an AI system to do so. As long as the truth remains as subjective as a given reader’s politics, we’ll be inundated with fake news.
Why the United Nations urgently needs its own regulation for AI
The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence.
“The sun is starting to set on the Wild West days of artificial intelligence,” writes Jeremy Kahn. He may have a point.
When this regulation comes into effect, it will change the way we conduct AI research and development. Until now, there have been few rules or regulations in AI: if you could think it, you could build it. That is no longer the case, at least in the European Union.
There is, however, a notable exception in the regulation: it does not apply to international organizations like the United Nations.
Naturally, the European Union does not have jurisdiction over the United Nations, which is governed by international law. The exclusion does not come as a surprise, but it does point to a gap in AI regulation: the United Nations needs its own regulation for artificial intelligence, and urgently so.
AI in the United Nations
Artificial intelligence technologies have been used increasingly by the United Nations. Several research and development labs, including the Global Pulse Lab, the Jetson initiative by the UN High Commissioner for Refugees, UNICEF’s Innovation Labs, and the Centre for Humanitarian Data, have focused their work on developing artificial intelligence solutions that would support the UN’s mission, notably in terms of anticipating and responding to humanitarian crises.
United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims. The UNHCR developed a biometrics database that contained the information of 7.1 million refugees. The World Food Program has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen.
In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modeling.
No oversight, no regulation
In 2014, the United States Bureau of Immigration and Customs Enforcement (ICE) awarded a US$20 billion contract to Palantir to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. Several human rights watchdogs, including Amnesty International, have raised human rights concerns about Palantir.
Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight. There have been many attempts to set up ethical modes of operation, such as the Office for the Co-ordination of Humanitarian Affairs’ Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models.
In the absence of regulation, however, tools such as these are merely best practices with no means of enforcement.
In the European Commission’s AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just as they would for a new drug or car. They are required to put together a detailed package before the AI is available for use, including a description of the models and data used, along with an explanation of how accuracy, privacy, and discriminatory impacts will be addressed.
The AI applications in question include biometric identification, categorization, and evaluation of people’s eligibility for public assistance benefits and services. They may also be used to dispatch emergency first-response services – all of these are current uses of AI by the United Nations.
Building trust
Conversely, the lack of regulation at the United Nations can be considered a challenge for agencies seeking to adopt more effective and novel technologies. As a result, many systems seem to have been developed and later abandoned without ever being integrated into actual decision-making.
An example of this is the Jetson tool, which was developed by UNHCR to predict the arrival of internally displaced persons at refugee camps in Somalia. The tool does not appear to have been updated since 2019 and seems unlikely to transition into the humanitarian organization’s operations – unless, that is, it can be properly certified by a new regulatory system.
Trust in AI is difficult to obtain, particularly in United Nations work, which is highly political and affects very vulnerable populations. The onus has largely been on data scientists to develop the credibility of their tools.
A regulatory framework like the one proposed by the European Commission would take the pressure off data scientists in the humanitarian sector to individually justify their activities. Instead, agencies or research labs who wanted to develop an AI solution would work within a regulated system with built-in accountability. This would produce more effective, safer and more just applications and uses of AI technology.
Article by Eleonore Fournier-Tombs, Adjunct Professor, University of Ottawa; Senior Consultant, World Bank; McGill University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
This free AI chatbot helps businesses fight COVID-19 misinformation
Avaamo, a company that specializes in conversational AI, recently built a virtual assistant to translate natural language queries about the COVID-19 pandemic into reliable insights. In other words, it’s an AI-powered chatbot that can answer just about any question you have about the pandemic.
Avaamo’s Project COVID uses a deep learning system for natural language processing to turn our questions about the pandemic into website and database queries. It works a lot like Google or Bing: you input text and the AI tries to find the most relevant information possible. The big difference is that Avaamo carefully guards the gates against misinformation by only surfacing results from reputable sources such as the CDC, NIH, WHO, and Johns Hopkins.
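To illustrate the gatekeeping idea in the abstract – this is not Avaamo’s code, just a sketch that assumes some search backend returning (title, URL) pairs – here’s how restricting answers to an allowlist of trusted domains might look in Python:

```python
# Illustrative sketch only, not Avaamo's implementation: answer a question by
# querying a (hypothetical) search backend and keeping only results from an
# allowlist of trusted public-health domains.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"cdc.gov", "nih.gov", "who.int", "jhu.edu"}  # example allowlist


def is_trusted(url: str) -> bool:
    """True if the URL's host is an allowlisted domain or one of its subdomains."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)


def answer(question: str, search_fn) -> list:
    """Pass the user's question to a search backend (hypothetical `search_fn`
    returning (title, url) pairs) and drop anything from untrusted sources."""
    return [(title, url) for title, url in search_fn(question) if is_trusted(url)]


# Usage with a stubbed-out search backend:
stub_results = [
    ("COVID-19 symptoms", "https://www.cdc.gov/coronavirus/2019-ncov/symptoms.html"),
    ("Miracle cure!", "https://totally-real-news.example/cure"),
]
print(answer("What are the symptoms of COVID-19?", lambda q: stub_results))
```

The real system presumably layers far more sophisticated language understanding on top of something like this – mapping a conversational question to structured queries – but the source-allowlisting step is the part that keeps misinformation out of the answers.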
Best of all, there’s no cost for businesses to use the tool, according to a press release from Avaamo.
Quick take: This is really cool. The information is valid, the company has recently updated the system, and it doesn’t appear that any business that wants to implement Project COVID on its own site will have to jump through hoops to get it installed.
Having a simple, non-medical (Avaamo is careful to point out that this is not a substitute for professional medical advice) way of providing solid answers to common COVID-19 questions is a big deal. We’ve moved past static FAQs because the pandemic is an ever-changing situation. This is a great way to keep clients, patients, and customers up to date without having to reinvent the wheel.
For more information, check out Avaamo’s website.