AI-powered information operations and new citizenship
Digital information is power, and today citizens have this new power at their fingertips, channeled through reactions, comments, shares, saves, and searches on our everyday digital platforms. However, this new power is ubiquitous, and its direct effects remain obscured inside the AI-powered black boxes of tech giants.
Unfortunately, we are often all too eager to tell Mark Zuckerberg, Sundar Pichai, or Jack Dorsey how to change their platform policies and algorithms to make the world a better place, while simultaneously failing to answer this question:
What are our responsibilities as citizens in this new reality in which digital and physical, political and commercial, and private and public are seamlessly interwoven?
Today, citizens need new skills for understanding the complex and multidimensional power of digital information and its relationship to democratic society.
Trying to stay up to date on what’s happening around us is part of the human condition. Today, everyone is trying to exploit that instinct to catch your attention. And increasingly, AI-powered algorithmic systems decide what kind of information gets through to you.
The way information reaches your attention has changed. The way you can consume information has changed. The way you can evaluate information has changed. And the way you can react to information has changed.
Controlling information has always been power, or at least connected to power. Today’s algorithmically amplified information operations are that old power on steroids.
In today’s world, election campaigns spend unprecedented amounts of resources on digital platforms, trying to target the right people at the right time in the right place. An information operation on social media can make millions of people take to the streets under the same banner across the globe. Or a social video app of foreign origin can unpredictably affect how people participate in a local political event.
At the same time, powerful personalized computational propaganda can reach you day and night, wherever you are. The people, organizations, and machines behind malicious information operations are using, misusing, and abusing the current mainstream platforms, such as Facebook, Twitter, Instagram, and YouTube, to spread radicalizing material across the globe.
As a result, the way you can manifest your citizenship online and offline has changed. Through your algorithmic information flows and interfaces, you have the power to influence, directly and indirectly, other people’s opinions and choices, at the polls and on the streets.
This fundamental change affects your capacity to use your digital tools and services in an ethical and sustainable way.
No single platform or technology can, on its own, solve the socio-technological challenges caused by information operations and computational propaganda. Developing methods against digital propaganda requires international, multidisciplinary collaboration among tech companies, academia, public authorities, news media, and educational institutions.
But to truly get to the bottom of the issue, we need to remember that, regardless of the enormous power of tech giants or of new regulations affecting social media platforms, individuals have a crucial role in making our digital platforms safer for everyone.
Importantly, new citizenship skills are required to help people act more responsibly on digital platforms.
First, data literacy and algorithm literacy are needed to understand the basic qualities and effects of the data and algorithms that are constantly at work, directly influencing what you see, think, and do online and beyond.
Data literacy lets you observe and assess your data trails and how they are used in different systems. Algorithm literacy gives you a basic awareness of how different AI-powered systems personalize your experience, and how the power of algorithms is used to influence your interpretations, expectations, and decision-making. These skills also make you more aware of your own (data) rights on digital platforms.
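To make algorithmic personalization concrete, here is a minimal, purely illustrative sketch of an engagement-driven feed ranker in Python. Every feature, weight, and post in it is invented for this example; real platform ranking systems are far more complex and are not public.

```python
# A deliberately simplified, hypothetical feed ranker. All features and
# weights are invented for illustration; no real platform works this way.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_click_prob: float   # how likely *you* are to click, inferred from your data trail
    predicted_share_prob: float   # how likely you are to share
    outrage_score: float          # emotionally charged content tends to drive engagement

def engagement_score(post: Post) -> float:
    # Engagement-optimized ranking: content that provokes reactions wins,
    # regardless of whether it is accurate or good for you.
    return (0.5 * post.predicted_click_prob
            + 0.3 * post.predicted_share_prob
            + 0.2 * post.outrage_score)

feed = [
    Post("Calm, factual explainer", 0.20, 0.05, 0.10),
    Post("Outrage-bait rumor", 0.60, 0.40, 0.90),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```

Even in this toy model, the emotionally charged post outranks the calm explainer. Recognizing that your feed is the output of this kind of optimization, not a neutral chronicle, is what algorithm literacy means in practice.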
Could someone design and develop an engaging tool that would help you achieve data and algorithm literacy, while being as frictionless as today’s mainstream social apps?
Second, up-to-date digital media literacy allows you to make more sense of your feeds, which are a continuously changing algorithmic bricolage of the serious and the entertaining, fact and fiction, news and marketing, disinformation and misinformation.
Digital media literacy enables you to recognize benign and malicious information operations and to tell the difference between the deliberate spreading of disinformation and the unconscious sharing of misinformation. In short, it empowers you to be more thoughtful in how you react to the information operations you encounter online.
Importantly, more data-aware, algorithm-informed and digital-media-literate users can demand more sustainable and ethical choices from their digital platforms. At the same time, we need a new practice of citizen experience design that brings citizen-centric thinking and values into the very core of AI design and development.
It’s time to start talking more seriously and thoughtfully about the responsibilities of individuals, and about the new citizenship skills required on today’s social media and tech platforms. In the long run, these emerging citizenship skills will be crucial for democratic societies across the globe.
Does AI get more hype than it deserves?
How differently would we think about artificial intelligence if the AI pioneers Allen Newell and Herbert Simon had won support for the seemingly less hype-prone term “complex information processing,” rather than “artificial intelligence,” which the field ultimately adopted?
On the surface, this thought experiment is interesting because it asks whether artificial intelligence is intrinsically hyped. That is, is the word alone enough to get us in trouble? This was the focus of a recent Wall Street Journal article in which columnist Christopher Mims asks experts in artificial intelligence whether the name alone produces confusion and hype.
Mims quotes Melanie Mitchell, a professor at the Santa Fe Institute, who quips, “What would the world be like if it [AI] was called that [complex information processing] instead?” Unfortunately, Mims uses Mitchell’s thought experiment as a punchline at the very end of the article, not as a counterfactual. Therefore, we will explore the facts surrounding the rhetorical question and answer whether the world would be better off without the name “artificial intelligence.”
Newell and Simon were the only two participants at the 1956 Dartmouth Workshop (considered the founding event of artificial intelligence as a field) who had developed a rudimentary “intelligent” machine. Their experience is viewed as grounded in practice, and consequently, they are often seen favorably as more pragmatic than their peers. Complex information processing, as an alternative to artificial intelligence, tends to be viewed through this prism. Perhaps it is true that Newell and Simon understood how tricky “intelligence” would be to define or solve. To be sure, artificial intelligence is hyped because of its connection to natural intelligence and to research on tasks performed by humans.
To judge whether the world would be a better place with complex information processing, we still need to determine whether Newell and Simon envisaged the term any differently than artificial intelligence, and whether they treated artificial intelligence any differently than their peers did. Alas, it does not appear that they considered complex information processing less hype-prone, or that they treated artificial intelligence differently from their peers.
Simon summarized his research goals in aspirational terms such as “to make us [humans] a part of nature instead of apart from it.” Aside from beautiful alliteration, Simon’s goal was to discover and ultimately build an intelligent machine that would exceed human intelligence and replace the capacities of the human brain. To Simon, solving intelligence was not merely an intellectual challenge but a spiritual pursuit.
The philosopher John Searle, a professor at the University of California, recounts how Simon would say that we already have machines that can literally think. According to Searle, there was no question of waiting for some future machine, because Simon celebrated existing computers as already having thoughts in the same sense that humans do.
Newell and Simon were computationalists, and true believers at that. They believed that intelligence is information processing. They did not merely hold that a computer is like a brain as a metaphor, or that some aspects of cognition involve information processing; they believed, literally, that cognition is information processing.
Newell and Simon accepted technical descriptions of computers as if they were philosophical arguments. Simon believed that the neurological functions of the brain and the functions of computers could be shown to process information in the same way. He was wrong. Simon also explained how computers have properties of the mind, including “thoughts.” But thoughts are precisely what computers do not have.
Complex information processing was meant to inspire a view of the brain as a computer, very much like artificial intelligence. This view expresses a falsehood: namely, that any differences between humans and machines are small and temporary. The counterfactual question of what the world would be like if AI had been called complex information processing is fallacious, because complex information processing is just another inimical theory of the mind.
One thing is sure: the Dartmouth Workshop produced copious amounts of hype that still permeates the field today. Perhaps the reason for the hype is that funding was required, and funding required drumming up public and political support, which in turn required an exciting vision, one that Simon and the other founders duly supplied. Perhaps hype was a consequence of the very human desire for personal prestige, and naming the field was an extension of that desire. Perhaps unlocking the mystery of human intelligence is inherently hyped. Perhaps AI hype is both a bug and a feature. Perhaps we should stop caring so much and focus more on solving problems that matter to people outside AI.
The answer to what the world would be like if Newell and Simon had won support for complex information processing is impossible to know. What makes thought experiments like this one banal is that such a counterfactual does not matter. What makes history uniquely historic is that it does not occur differently. In other words, a different name in the past will not produce different results in the present. While I am not convinced that complex information processing would have been any less hyped, I am also not convinced that less hype would have been good for the field. The most we can hope for are better results in the future with the names we already have.
This thought experiment highlights something else, which is the power insiders believe names contain. Names represent an important part of the culture and values of insiders, who argue about the name of the field, the names of solutions, whether a solution deserves a name, and ultimately whether the field deserved a better name in the past. Perhaps such a thought experiment should include a counterfactual that encourages us to stop naming solutions. After all, arguing about names is not helpful for solving problems. Our solutions do not require names to impact the lives of others.
This article was originally published by Rich Heimann on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. You can read the original article here.
Trump’s 2016 campaign targeted ads to deter 3.5M Black Americans from voting, leaks reveal
Donald Trump’s 2016 US presidential election campaign worked with Cambridge Analytica to deter millions of Black Americans from voting by targeting them with anti-Hillary Clinton ads, according to a massive database obtained by Britain’s Channel 4 News.
The broadcaster claims Trump’s digital campaign team used the cache to build psychological profiles of millions of US citizens across 16 key battleground states. An algorithm then divided them into eight different categories, including core supporters on each side and “deadbeats” who were unlikely to vote.
Another segment, made up of people the campaign wanted to dissuade from voting, was marked “Deterrence.” A total of 3.5 million Black Americans were placed in this group and then micro-targeted on Facebook and other platforms with ads such as videos of Clinton’s notorious “super predators” speech.
The database showed that Black voters were disproportionately marked for deterrence. In Georgia, for example, they made up just 32% of the population, but 61% of those targeted for deterrence.
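The two Georgia figures are enough for a quick back-of-the-envelope check of that disparity. The snippet below is purely illustrative; only the two percentages come from the Channel 4 report.

```python
# Back-of-the-envelope check of the Georgia disparity reported by Channel 4.
# Only the two percentages come from the report; the calculation is illustrative.
black_share_of_population = 0.32   # Black share of Georgia's population
black_share_of_deterrence = 0.61   # Black share of the "Deterrence" group

# How overrepresented were Black Georgians among deterrence targets?
factor = black_share_of_deterrence / black_share_of_population
print(f"Overrepresented by a factor of {factor:.1f}")  # prints 1.9
```

In other words, Black Georgians were nearly twice as likely to land in the “Deterrence” bucket as their share of the population would predict.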
Channel 4 believes this tactic helped suppress Black turnout in several states where Trump won surprise victories over Clinton by razor-thin margins.
In the swing state of Wisconsin, turnout among Black voters dropped by 19%, helping Trump win the state by just 23,000 votes. Trump later bragged that the Black vote had helped him win the election.
According to Channel 4, the Trump campaign spent $44 million on Facebook and posted almost 6 million ads. But because many of the posts disappeared once the campaign stopped paying for them, and because Facebook didn’t have an “Ad Library” in 2016, there’s no public record of the ads.
The Trump campaign dismissed the report as “fake news,” while a Facebook spokesperson told Channel 4 that the tactics used by Cambridge Analytica in 2016 wouldn’t work today under the platform’s newer rules.
We’ll see whether the new rules help produce different results when Americans go to the polls in November.