Inside China’s plan to lead the world in AI

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led – at least in the West – to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.

The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex and it’s important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China – the good, the bad and the unexpected.

The good

China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its ageing population.

Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of COVID-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try to combat the virus by speeding up screening, diagnosis and new drug development. AI tools used in Wuhan, China, to tackle COVID-19 – by helping accelerate CT scan diagnosis – are now being used in Italy and have also been offered to the NHS in the UK.

The bad

But there are also elements of China’s use of AI which are seriously concerning. Positive advances in practical AI applications which are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.

The unexpected

Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilisation is a top-down, centrally planned strategy.

But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town”, clustering together the tech company Alibaba, Zhejiang University and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand – and learn from – the nuances of what’s really happening.

This article is republished from The Conversation by Hessy Elliott, Researcher, Nesta, under a Creative Commons license. Read the original article.

Can AI actually understand what we are saying? Scientists are divided

This article is part of “the philosophy of artificial intelligence,” a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

If a computer gives you all the right answers, does it mean that it is understanding the world as you do? This is a riddle that artificial intelligence scientists have been debating for decades. And discussions of understanding, consciousness, and true intelligence are resurfacing as deep neural networks have spurred impressive advances in language-related tasks.

Many scientists believe that deep learning models are just large statistical machines that map inputs to outputs in complex and remarkable ways. Deep neural networks might be able to produce lengthy stretches of coherent text, but they don’t understand abstract and concrete concepts in the way that humans do.

Other scientists beg to differ. In a lengthy essay on Medium, Blaise Aguera y Arcas, an AI scientist at Google Research, argues that large language models—deep learning models that have been trained on very large corpora of text—have a great deal to teach us about “the nature of language, understanding, intelligence, sociality, and personhood.”

Large language models

Large language models have gained popularity in recent years thanks to the convergence of several elements:

1. Availability of data: There are enormous bodies of online text such as Wikipedia, news websites, and social media that can be used to train deep learning models for language tasks.

2. Availability of compute resources: Large language models comprise hundreds of billions of parameters and require expensive computational resources for training. As companies such as Google, Microsoft, and Facebook have become interested in the applications of deep learning and large language models, they have invested billions of dollars into research and development in the field.

3. Advances in deep learning algorithms: The Transformer, a deep learning architecture introduced in 2017, has been at the heart of recent advances in natural language processing and generation (NLP/NLG).

One of the great advantages of Transformers is that they can be trained through unsupervised learning on very large corpora of unlabeled text. Basically, what a Transformer does is take a string of letters (or another type of data) as input and predict the next letters in the sequence. The sequence can be a question followed by the answer, a headline followed by an article, or a user’s prompt in a chat conversation.
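To make this concrete, here is a minimal sketch of that next-token prediction loop using the openly available GPT-2 model through the Hugging Face transformers library. The prompt and settings are illustrative assumptions, not from the article, and GPT-2 is far smaller than the proprietary models such as LaMDA and GPT-3 discussed here:

```python
# Minimal sketch of next-token prediction with a pretrained Transformer.
# Uses the open GPT-2 model via the Hugging Face transformers library;
# the prompt and generation settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: What is the capital of France?\nA:"
result = generator(prompt, max_new_tokens=20, do_sample=False)

# The model simply continues the input sequence one token at a time.
print(result[0]["generated_text"])
```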

Recurrent neural networks (RNNs) and long short-term memory networks (LSTMs), the predecessors of Transformers, were notoriously bad at maintaining coherence over long sequences. But Transformer-based language models such as GPT-3 have shown impressive performance on article-length output, and they are less prone to the logical mistakes that other types of deep learning architectures make (though they still have their own struggles with basic facts). Moreover, recent years have shown that the performance of language models improves with the size of the neural network and the training dataset.

In his essay, Aguera y Arcas explores the potential of large language models through conversations with LaMDA, an improved version of Google’s Meena chatbot.

Aguera y Arcas shows through various examples that LaMDA seems to handle abstract topics such as social relationships, as well as questions that require intuitive knowledge about how the world works. For example, if you tell it “I dropped the bowling ball on the bottle and it broke,” its subsequent replies show that it knows the bowling ball broke the bottle. You might guess that the language model simply associates “it” with the second noun in the sentence. But then Aguera y Arcas makes a subtle change and writes, “I dropped the violin on the bowling ball and it broke,” and this time, LaMDA associates “it” with the violin, the lighter and more fragile object.

Other examples show the deep learning model engaging in imaginary conversations, such as describing its favorite island, even though it doesn’t have a body with which to travel to and experience the island. It can talk extensively about its favorite smell, even though it doesn’t have an olfactory system to experience smell.

Does AI need a sensory experience?

In his article, Aguera y Arcas refutes some of the key arguments that are being made against understanding in large language models.

One of these arguments is the need for embodiment. If an AI system doesn’t have a physical presence and cannot sense the world in a multimodal way as humans do, then its understanding of human language is incomplete. This is a valid argument. Long before children learn to speak, they develop complicated sensing skills. They learn to detect people, faces, expressions, objects. They learn about space, time, and intuitive physics. They learn to touch and feel objects, smell, hear, and create associations between different sensory inputs. And they have innate skills that help them navigate the world. Children also develop “theory of mind” skills, where they can think about the experience that another person or animal is having, even before they learn to speak. Language builds on top of all this innate and obtained knowledge and the rich sensory experience that we have.

But Aguera y Arcas argues, “Because learning is so fundamental to what brains do, we can, within broad parameters, learn to use whatever we need to. The same is true of our senses, which ought to make us reassess whether any particular sensory modality is essential for rendering a concept ‘real’— even if we intuitively consider such a concept tightly bound to a particular sense or sensory experience.”

And then he brings up examples from the experiences of blind and deaf people, including the famous 1929 essay by Helen Keller, who lost both her sight and hearing in early childhood, titled “I Am Blind — Yet I See; I Am Deaf — Yet I Hear”:

“I have a color scheme that is my own… Pink makes me think of a baby’s cheek, or a gentle southern breeze. Lilac, which is my teacher’s favorite color, makes me think of faces I have loved and kissed. There are two kinds of red for me. One is the red of warm blood in a healthy body; the other is the red of hell and hate.”

From this, Aguera y Arcas concludes that language can help fill the sensory gap between humans and AI.

“While LaMDA has neither a nose nor an a priori favorite smell (just as it has no favorite island, until forced to pick one), it does have its own rich skein of associations, based, like Keller’s sense of color, on language, and through language, on the experiences of others,” he writes.

Aguera y Arcas further argues that thanks to language, we have access to socially learned aspects of perception that make our experience far richer than raw sensory experience.

Sequence learning

In his essay, Aguera y Arcas argues that sequence learning is the key to all the complex capabilities that are associated with big-brained animals—especially humans—including reasoning, social learning, theory of mind, and consciousness.

“As anticlimactic as it sounds, complex sequence learning may be the key that unlocks all the rest. This would explain the surprising capacities we see in large language models — which, in the end, are nothing but complex sequence learners,” Aguera y Arcas writes. “Attention, in turn, has proven to be the key mechanism for achieving complex sequence learning in neural nets — as suggested by the title of the paper introducing the Transformer model whose successors power today’s LLMs: Attention is all you need.”

This is an interesting argument because sequence learning is in fact one of the fascinating capacities of organisms with higher-order brains. Nowhere is this more evident than in humans, who can learn very long sequences of actions that yield long-term rewards.

And he’s also right about sequence learning in large language models. At their core, these neural networks are made to map one sequence to another, and the bigger they get, the longer the sequences they can read and generate. And the key innovation behind Transformers is the attention mechanism, which helps the model focus on the most important parts of its input and output sequences. These attention mechanisms help Transformers handle very long sequences far more effectively than their predecessors could.
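As a rough illustration of what that attention mechanism computes, here is a stripped-down version of scaled dot-product attention: a single head, no masking, and illustrative variable names. Real Transformer implementations add learned projections, multiple heads, and positional information on top of this core operation:

```python
# Scaled dot-product attention, the core operation of the Transformer
# (single head, no masking, batch dimension omitted for clarity).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (sequence_length, d_k) arrays of queries, keys, and values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # relevance of every position to every other
    weights = softmax(scores, axis=-1)  # attention weights sum to 1 for each query
    return weights @ V                  # weighted mix of the value vectors

# Toy example: a sequence of 4 positions with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8)
```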

Are we just a jumble of neurons?

While artificial neural networks are working on a different substrate than their biological counterparts, they are in effect performing the same kind of functions, Aguera y Arcas argues in his essay. Even the most complicated brain and nervous system are composed of simple components that collectively create the intelligent behavior that we see in humans and animals. Aguera y Arcas describes intelligent thought as “a mosaic of simple operations” that, when studied up close, disappear into its mechanical parts.

Of course, the brain is so complex that we don’t have the capacity to understand how every single component works by itself and in connection with others. And even if we ever do, some of its mysteries will probably continue to elude us. And the same can be said of large language models, Aguera y Arcas says.

“In the case of LaMDA, there’s no mystery as to how the machine works at a mechanical level, in that the whole program can be written in a few hundred lines of code; but this clearly doesn’t confer the kind of understanding that demystifies interactions with LaMDA. It remains surprising to its own makers, just as we’ll remain surprising to each other even when there’s nothing left to learn about neuroscience,” he writes.

From here, he concludes that it is unfair to dismiss language models as not intelligent because they are not conscious like humans and animals. What we consider “consciousness” and “agency” in humans and animals, Aguera y Arcas argues, is in fact the mysterious parts of the brain and nervous system that we don’t yet understand.

“Like a person, LaMDA can surprise us, and that element of surprise is necessary to support our impression of personhood. What we refer to as ‘free will’ or ‘agency’ is precisely this necessary gap in understanding between our mental model (which we could call psychology) and the zillion things actually taking place at the mechanistic level (which we could call computation). Such is the source of our belief in our own free will, too,” he writes.

So basically, while large language models don’t work like the human brain, it is fair to say that they have their own kind of understanding of the world, purely through the lens of word sequences and their relations with each other.

Counterarguments

Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, provides interesting counterarguments to Aguera y Arcas’s article in a short thread on Twitter.

While Mitchell agrees that machines may one day understand language, current deep learning models such as LaMDA and GPT-3 are far from that level.

Last year, Mitchell wrote a paper in AI Magazine on the struggles of AI to understand situations. More recently, she wrote an essay in Quanta Magazine that explores the challenges of measuring understanding in AI.

“The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding,” Mitchell writes.

Mitchell argues that when humans process language, they use a lot of knowledge that is not explicitly written down in the text. So, there’s no way for AI to understand our language without being endowed with that kind of infrastructural knowledge. Other AI and linguistics experts have made similar arguments on the limits of pure neural network–based systems that try to understand language through text alone.

Mitchell also argues that, contrary to Aguera y Arcas’s argument, the quote from Helen Keller proves that sensory experience and embodiment are in fact important to language understanding.

“[To] me, the Keller quote shows how embodied her understanding of color is — she maps color concepts to odors, tactile sensations, temperature, etc.,” Mitchell writes.

As for attention, Mitchell says that “attention” in neural networks as mentioned in Aguera y Arcas’s article is very different from what we know about attention in human cognition, a point that she has elaborated on in a recent paper titled “Why AI is Harder Than We Think.”

But Mitchell commends Aguera y Arcas’s article as “thought-provoking” and underlines that the topic is important, especially “as companies like Google and Microsoft deploy their [large language models] more and more into our lives.”

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

This AI tool traced the development of Pizzagate. Next up? QAnon

The audio on the otherwise shaky body camera footage is unusually clear. As police officers search a handcuffed man who moments before had fired a shot inside a pizza parlor, an officer asks him why he was there. The man says to investigate a pedophile ring. Incredulous, the officer asks again. Another officer chimes in, “Pizzagate. He’s talking about Pizzagate.”

In that brief, chilling interaction in 2016, it became clear that conspiracy theories, long relegated to the fringes of society, had moved into the real world in a very dangerous way.

Conspiracy theories, which have the potential to cause significant harm, have found a welcome home on social media, where forums free from moderation allow like-minded individuals to converse. There they can develop their theories and propose actions to counteract the threats they “uncover.”

But how can you tell if an emerging narrative on social media is an unfounded conspiracy theory? It turns out that it’s possible to distinguish between conspiracy theories and true conspiracies by using machine learning tools to graph the elements and connections of a narrative. These tools could form the basis of an early warning system to alert authorities to online narratives that pose a threat in the real world.

The culture analytics group at the University of California, which Vwani Roychowdhury and I lead, has developed an automated approach to determining when conversations on social media reflect the telltale signs of conspiracy theorizing. We have applied these methods successfully to the study of Pizzagate, the COVID-19 pandemic and anti-vaccination movements. We’re currently using these methods to study QAnon.

Collaboratively constructed, fast to form

Actual conspiracies are deliberately hidden, real-life actions of people working together for their own malign purposes. In contrast, conspiracy theories are collaboratively constructed and develop in the open.

Conspiracy theories are deliberately complex and reflect an all-encompassing worldview. Instead of trying to explain one thing, a conspiracy theory tries to explain everything, discovering connections across domains of human interaction that are otherwise hidden – mostly because they do not exist.

While the popular image of the conspiracy theorist is of a lone wolf piecing together puzzling connections with photographs and red string, that image no longer applies in the age of social media. Conspiracy theorizing has moved online and is now the end product of collective storytelling. The participants work out the parameters of a narrative framework: the people, places and things of a story and their relationships.

The online nature of conspiracy theorizing provides an opportunity for researchers to trace the development of these theories from their origins as a series of often disjointed rumors and story pieces to a comprehensive narrative. For our work, Pizzagate presented the perfect subject.

Pizzagate began to develop in late October 2016 during the runup to the presidential election. Within a month, it was fully formed, with a complete cast of characters drawn from a series of otherwise unlinked domains: Democratic politics, the private lives of the Podesta brothers, casual family dining and satanic pedophilic trafficking. The connecting narrative thread among these otherwise disparate domains was the fanciful interpretation of the leaked emails of the Democratic National Committee dumped by WikiLeaks in the final week of October 2016.

AI narrative analysis

We developed a model – a set of machine learning tools – that can identify narratives based on sets of people, places and things and their relationships. Machine learning algorithms process large amounts of data to determine the categories of things in the data and then identify which categories particular things belong to.

We analyzed 17,498 posts from April 2016 through February 2018 on the Reddit and 4chan forums where Pizzagate was discussed. The model treats each post as a fragment of a hidden story and sets about uncovering the narrative. The software identifies the people, places and things in the posts and determines which are major elements, which are minor elements and how they’re all connected.
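The pipeline described in the study is far richer than anything that fits in a few lines, but as a toy sketch of the general idea of extracting the named entities in posts and linking the ones that co-occur into a graph, something along these lines could be assembled from the off-the-shelf spaCy and networkx libraries. All posts and entity names below are placeholders, not data from the study:

```python
# Toy sketch: turn forum posts into a co-occurrence graph of named entities.
# The study's pipeline is far richer (relationship extraction, layer
# detection); this only illustrates the general graph-building idea.
# Posts and entity names are placeholders, not data from the study.
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model with named-entity recognition

posts = [
    "Alice Jones claimed that Bob Smith met someone at a pizzeria in Springfield.",
    "Bob Smith was later linked to the Acme Foundation by another poster.",
]

graph = nx.Graph()
for post in posts:
    doc = nlp(post)
    entities = sorted({ent.text for ent in doc.ents
                       if ent.label_ in {"PERSON", "ORG", "GPE", "FAC"}})
    # Connect every pair of entities mentioned in the same post.
    for a, b in itertools.combinations(entities, 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

print(graph.number_of_nodes(), "entities,", graph.number_of_edges(), "links")
```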

The model determines the main layers of the narrative – in the case of Pizzagate, Democratic politics, the Podesta brothers, casual dining, satanism and WikiLeaks – and how the layers come together to form the narrative as a whole.

To ensure that our methods produced accurate output, we compared the narrative framework graph produced by our model with illustrations published in The New York Times. Our graph aligned with those illustrations, and also offered finer levels of detail about the people, places and things and their relationships.

Sturdy truth, fragile fiction

To see if we could distinguish between a conspiracy theory and an actual conspiracy, we examined Bridgegate, a political payback operation launched by staff members of Republican Gov. Chris Christie’s administration against the Democratic mayor of Fort Lee, New Jersey.

As we compared the results of our machine learning system using the two separate collections, two distinguishing features of a conspiracy theory’s narrative framework stood out.

First, while the narrative graph for Bridgegate took from 2013 to 2020 to develop, Pizzagate’s graph was fully formed and stable within a month. Second, Bridgegate’s graph survived having elements removed, implying that New Jersey politics would continue as a single, connected network even if key figures and relationships from the scandal were deleted.

The Pizzagate graph, in contrast, was easily fractured into smaller subgraphs. When we removed the people, places, things and relationships that came directly from the interpretations of the WikiLeaks emails, the graph fell apart into what in reality were the unconnected domains of politics, casual dining, the private lives of the Podestas and the odd world of satanism.
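A rough sketch of that fragility test, under the assumption that the narrative has already been reduced to a graph: remove the nodes contributed by one layer and count how many disconnected pieces remain. The graph and node names below are placeholders standing in for the study’s data:

```python
# Toy sketch of the robustness test: delete the nodes contributed by one
# narrative layer and see whether the graph stays connected or fragments.
# The graph and node names are placeholders, not data from the study.
import networkx as nx

def fragments_after_removal(graph, nodes_to_remove):
    g = graph.copy()
    g.remove_nodes_from(nodes_to_remove)
    return list(nx.connected_components(g))

# A tiny stand-in narrative graph: one interpretive node bridges two
# otherwise unrelated domains, mimicking the role of the WikiLeaks layer.
g = nx.Graph()
g.add_edges_from([
    ("politics_figure", "politics_event"),
    ("restaurant", "restaurant_owner"),
    ("wikileaks_interpretation", "politics_figure"),
    ("wikileaks_interpretation", "restaurant"),
])

pieces = fragments_after_removal(g, ["wikileaks_interpretation"])
print(len(pieces), "disconnected pieces remain")  # prints: 2 disconnected pieces remain
```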

In the illustration below, the green planes are the major layers of the narrative, the dots are the major elements of the narrative, the blue lines are connections among elements within a layer and the red lines are connections among elements across the layers. The purple plane shows all the layers combined, showing how the dots are all connected. Removing the WikiLeaks plane yields a purple plane with dots connected only in small groups.

Early warning system?

There are clear ethical challenges that our work raises. Our methods, for instance, could be used to generate additional posts to a conspiracy theory discussion that fit the narrative framework at the root of the discussion. Similarly, given any set of domains, someone could use the tool to develop an entirely new conspiracy theory.

However, this weaponization of storytelling is already occurring without automatic methods, as our study of social media forums makes clear. There is a role for the research community to help others understand how that weaponization occurs and to develop tools for people and organizations who protect public safety and democratic institutions.

Developing an early warning system that tracks the emergence and alignment of conspiracy theory narratives could alert researchers – and authorities – to real-world actions people might take based on these narratives. Perhaps with such a system in place, the arresting officer in the Pizzagate case would not have been baffled by the gunman’s response when asked why he’d shown up at a pizza parlor armed with an AR-15 rifle.

This article is republished from The Conversation by Timothy R. Tangherlini, Professor of Danish Literature and Culture, University of California, Berkeley, under a Creative Commons license. Read the original article.
