Sorry former Pentagon expert, but China is nowhere near winning the AI race
Nicolas Chaillan, the Pentagon’s former Chief Software Officer, is on a whirlwind press tour to drum up fervor for his radical assertion that the US has already lost the AI race to China.
Speaking to the Financial Times in his first interview after leaving his post at the Pentagon, Chaillan said:
Chaillan’s departure from the Pentagon was preceded by a “blistering letter” where he signaled he was quitting out of frustration over the government’s inability to properly implement cybersecurity and artificial intelligence technologies.
And now he’s telling anyone who will listen that the US has already lost a war to China that hasn’t even happened yet. He’s essentially saying that the US is a sitting duck whose safety and sanctity are predicated on the fact that China is choosing not to attack and destroy us.
And, let’s be clear, Chaillan’s not talking about a hot war. Per the FT article, he said “whether it takes some kind of war or not is anecdotal.”
This is what you call propaganda.
Here’s why:
The score: No matter how you measure things, the US is not losing the AI race to China.
Among China’s top AI companies you’ll find Baidu, a business with a market cap of about $55B.
Let’s put that into perspective. Google is worth over a trillion dollars. That’s 18 times more than Baidu. And that’s just Google. Amazon, Apple, and Microsoft are each worth over a trillion dollars, and they’re all AI companies as well.
There is no measure, including talent draw and laboratory size, by which you could say China is even in the same class when it comes to AI companies.
And when it comes to AI research coming out of universities, the US again leads the world by a wide margin.
Not only does the US attract students from all over the world, but it also houses some of the world’s most advanced AI programs at the university level. Between the cognitive research done at places like NYU and Harvard and the machine learning applications for engineering being invented at MIT, Carnegie Mellon, and their ilk, it’s incredibly difficult to argue that China’s academic research outclasses the US’s.
That’s not to denigrate the amazing work being done by researchers in China, but there’s certainly no reason to believe China will inevitably overtake the West by sheer virtue of its academia.
And that just leaves public-sector and military AI. What’s interesting about China is that its government gives far more support to AI research than any other nation’s.
Many experts feel that China’s massive investments in public-sector research, combined with its authoritarian approach to controlling what the public sector and academia do, could lead to a situation where China leapfrogs the US.
This, however, is conjecture. The reality is that US companies don’t need government investments. Unlike the US government, Amazon isn’t in massive debt to its shareholders. Amazon is one of the most profitable enterprises in the history of humanity.
And there’s no law saying Amazon must work with the US government. It’s free to continue making money hand over fist and pushing the philosophical limits on what wealth is or how economies work whether it chooses to play ball with the Pentagon or not.
The point is this: In China, all research is military research.
The FT article makes it apparent that Chaillan’s real problem is with democracy:
In some weird “give up your freedoms for the greater good” way, his words might make sense. Except for one thing: the one major player in the global AI game that we haven’t spoken about yet is the Defense Advanced Research Projects Agency, more commonly referred to by its acronym, DARPA.
DARPA is the US government’s version of the laboratory Q leads in the James Bond universe. It’s always looking for technologies – literally any technologies, no matter how strange or unlikely – to exploit for military use.
But there’s nothing fictional about DARPA or its work. Either DARPA or a DARPA-adjacent agency with a similar remit is at the financial heart of thousands upon thousands of university studies and technology projects in the US every year.
For perspective: DARPA literally invented the internet, GPS, and the graphical user interface.
I mention all of this to point out that there is no domain in which you can say the US is not leading the world in AI. I’m not saying that as a patriot (disclosure: I’m a US citizen who lives abroad and a US Navy veteran). I’m saying it because it’s demonstrably true.
In fairness, Chaillan’s clarified his words since the FT article. On LinkedIn he wrote:
Never let the truth get in the way of a good story, eh? Per the FT article:
The bottom line is that Chaillan’s spreading propaganda. He’s employing a centuries-old racist trope called the “China Bogeyman.” The US has used it for decades to justify its bloated defense budget to the public.
The idea is that US citizens should be scared of China not because of its academic, economic, or military technologies, but because of the sheer fact that there are 1.5 billion people in that country who aren’t Americans.
Chaillan’s using the China Bogeyman and his former positions as an IT boss for the Air Force and the Pentagon as a political tool. Whether his goal is to run for office or to land a lofty consulting position at a conservative-leaning organization, the purpose of Chaillan’s outlandish statements is clear: to pressure the public into believing their safety relies on doing whatever it takes to ward off the imminent threat posed by the mere existence of 1.5 billion people in China.
It’s a baseless argument against the development of ethical AI and policies restricting the US from creating and using harmful AI technologies.
Neuralink and Tesla have an AI problem that Elon’s money can’t solve
Elon Musk’s problems are bigger and more important than yours. While most of us are consumed with our day-to-day activities, Musk has been anointed by a higher power to save us all from ourselves.
He’s here to ensure we eliminate car accidents, make traffic a thing of the past, solve autism (his words, not mine), connect human brains to machines, fill the night sky with satellites so everyone can have internet access, and colonize Mars.
He doesn’t exactly know how we’re going to accomplish all those things, but he has more than enough money to turn any and every single good idea he’s ever had into a functioning industry.
Who cares if Tesla’s 10, 20, or 100 years away from actually solving the driverless car problem? Financial experts are in near-unanimous agreement that $TSLA is doing just fine with its current amount of progress.
What the scientific and machine learning communities think is usually irrelevant to the mainstream when it comes to Musk. The entire field’s views on driverless cars are usually relegated to a mumbling sentence in the next-to-last paragraph of articles about Tesla’s endeavors.
It typically goes something like this: “some experts think these technologies may take longer to mature.”
And who cares if Neuralink’s already missed expectations and is about to “invent” a brain-computer interface that’s exactly as sophisticated as the one Eberhard Fetz built back in 1969?
People who get the opportunity to invest in Neuralink will make money as long as Elon keeps the hype-train going. Never mind that the distant technology he claims will one day take the common BCI his company is making today and turn it into a magical telepathy machine is purely hypothetical in 2021.
The reality is that AI can’t do the things Musk needs it to do in order for Tesla and Neuralink to make good on his promises.
Here’s why:
AI’s “mapping” problem
When we talk about a mapping problem, we don’t mean Google Maps. We’re referring to the idea that maps themselves can’t possibly be one-to-one representations of a given area.
Any “map” automatically suffers from severe data loss. In a “real” territory, you can count every blade of grass, every pebble, and every mud puddle. On a map, you just see a tiny representation of the immense reality. Maps are useful for directions, but if you’re trying to count the number of trees on your property or determine exactly how many wolverines are hiding in a nearby thicket, they’re pretty useless.
When we train a deep learning system to “understand” something, we have to feed it data. And when it comes to massively complex tasks such as driving a car or interpreting brain waves, it’s simply impossible to have all of the data. We just map out a tiny-scale approximation of the problem and hope we can scale the algorithms to the task.
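To make that concrete, here’s a minimal sketch (all numbers invented for illustration) of what a lossy “map” does in practice: a model fitted on a narrow slice of a nonlinear world is excellent inside that slice and badly wrong outside it.

```python
import math

# Hypothetical illustration: "map" a nonlinear world (sine) with a model
# trained only on a narrow slice of it, then query outside that slice.
train_x = [i / 100 for i in range(50)]       # training data covers 0.0 .. 0.49
train_y = [math.sin(x) for x in train_x]     # sine looks nearly linear here

# Ordinary least-squares fit of y = a*x + b on the training slice
n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
     / sum((x - mean_x) ** 2 for x in train_x))
b = mean_y - a * mean_x

def model(x):
    return a * x + b

# Inside the mapped region, the approximation is excellent...
print(abs(model(0.25) - math.sin(0.25)))   # tiny error
# ...outside it, the very same model is wildly wrong
print(abs(model(3.0) - math.sin(3.0)))     # large error
```

The model isn’t “broken”; it faithfully learned the slice of reality it was shown. The failure lives entirely in what the map left out.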
This is the biggest problem in AI. It’s why Tesla can use Dojo to train its algorithms over millions, billions, or trillions of iterations, giving its vehicles more driving experience than that of every human who has ever existed combined, and yet they still make inexplicable mistakes.
We can all point to the statistics and shriek “Autopilot is safer than unaugmented human driving!” just like Musk does, but the fact of the matter is that humans are far safer drivers without Autopilot than Tesla’s Full Self Driving features are without a human.
Making the safest, fastest, most efficient production car in history is an incredible feat for which Musk and Tesla should be lauded. But that doesn’t mean the company is anywhere near solving driverless cars or any of the AI problems that plague the entire industry.
No amount of money is going to brute-force human-level algorithms, and Elon Musk may be the only AI “expert” who still believes deep learning-based computer vision alone is the key to self-driving vehicles.
And the exact same problem applies to Neuralink, but at a much larger scale.
Experts believe there are more than 100 billion neurons in the human brain. Despite what Elon Musk may have recently tweeted, we don’t even have a basic map of those neurons.
In fact, neuroscientists are still challenging the idea of regionalized brain activity. Recent studies indicate that different neurons light up in changing patterns even when brains access the same memories or thoughts more than once. In other words: if you perfectly map out what happens when a person thinks about ice cream, the next time they think about ice cream the old map could be completely useless.
We don’t know how to map the brain, which means we have no way of building a dataset to train AI how to interpret it.
So how do you train an AI to model brain activity? You fake it. You teach a monkey to push a button to summon food, and then you teach it to use a brain-computer interface to push the button, as Fetz did back in 1969.
Then you teach an AI to interpret the whole of the monkey’s brain activity in such a way that it can tell whether the monkey was trying to push the button or not.
Keep in mind, the AI does not interpret what the monkey wants to do, it just interprets whether the button should be pushed or not.
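As a toy illustration of how narrow that setup is (everything here is hypothetical, synthetic data standing in for real recordings): the “decoder” below learns a single threshold over invented firing-rate features, and the only question it can ever answer is push or don’t push.

```python
import random

random.seed(0)

# Toy stand-in for the Fetz-style setup: each "recording" is a handful of
# firing-rate features, and pressing the button shifts them upward on average.
# All numbers are invented for illustration.
def recording(pressed):
    base = 1.0 if pressed else 0.0
    return [base + random.gauss(0, 0.4) for _ in range(5)]

train = [(recording(p), p) for p in [True, False] * 200]

# "Training": learn one threshold on the mean firing rate.
press_means = [sum(r) / len(r) for r, p in train if p]
rest_means  = [sum(r) / len(r) for r, p in train if not p]
threshold = (sum(press_means) / len(press_means)
             + sum(rest_means) / len(rest_means)) / 2

def predict(r):
    # The model only answers "should the button be pushed?"
    # It knows nothing about what the monkey wants the button FOR.
    return sum(r) / len(r) > threshold

test_set = [(recording(p), p) for p in [True, False] * 100]
correct = sum(predict(r) == p for r, p in test_set)
print(correct / len(test_set))  # decodes press vs. no-press well above chance
```

The decoder works, but it has learned exactly one binary distinction. Hunger, intent, and meaning never enter the model at all, which is the point of the paragraph above.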
So, you’d need a button for everything. You’d need enough test subjects wearing BCIs to generate enough generalized brainwave data to train the AI to perform every single function you desired.
The equivalent of this would be if Spotify had to build robots and teach them to play the actual instruments used to make every song on the platform.
Every time you wanted to listen to “Beat It” by Michael Jackson, you’d have to put a training request in with the robots. They’d pick up the instruments and start making absolutely random noises for thousands or millions of training hours until they “hallucinated” something similar to “Beat It.”
As the AI changed its version of the song, its human developers would give it feedback to indicate if it was getting closer to the original tune or further away.
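That feedback loop looks something like this toy hill-climber (a hypothetical sketch; the “melody” and the closeness score are invented): guess at random, keep whatever the critic says is closer to the target, and repeat until the tune emerges.

```python
import random

random.seed(1)

# Invented target "melody" as MIDI-ish note numbers, for illustration only.
TARGET = [60, 62, 64, 65, 67, 65, 64, 62]
NOTES = list(range(55, 72))

def closeness(attempt):
    # The developers' feedback: higher means closer to the original tune.
    return sum(a == t for a, t in zip(attempt, TARGET))

attempt = [random.choice(NOTES) for _ in TARGET]
steps = 0
while closeness(attempt) < len(TARGET):
    steps += 1
    candidate = attempt[:]
    # Mutate one random note, keep the change only if feedback approves.
    candidate[random.randrange(len(TARGET))] = random.choice(NOTES)
    if closeness(candidate) >= closeness(attempt):
        attempt = candidate

print(steps)  # many blind guesses before the tune is "hallucinated"
```

Even for an eight-note toy melody, the blind guess-and-check loop burns through hundreds of attempts, which is the asymmetry the next paragraph is about.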
Meanwhile, a semi-talented human musician could play the entire composition for just about any Michael Jackson song after only a couple of listens.
Elon’s money is no good here
Robots don’t care how rich you are. In fact, AI doesn’t care about anything because it’s just a bunch of algorithms getting smashed together with data to produce bespoke output.
People tend to assume Tesla and Neuralink are going to solve the AI problem because they have, essentially, unlimited backing.
But, as Ian Goodfellow at Apple, Yann LeCun at Facebook, and Jeff Dean at Google can all tell you: if you could solve self-driving cars, the human brain, or AGI with money, it would have already been solved.
Musk may be the richest man alive, but even his wealth doesn’t eclipse the combined worth of the biggest companies in tech.
And what the general public doesn’t quite seem to grasp is this: Facebook, Google, Tesla, and all the other AI companies are working on the exact same AI problems.
When DeepMind was founded, its purpose was not to win chess or Go games. Its purpose was to create an AGI. It’s the same with GPT-3 and just about any other multimodal AI system being developed today.
When Ian Goodfellow reinvigorated the field of deep learning with his take on neural networks in 2014, he and others working on similar challenges lit a fire under the technology world.
In the time since, we’ve seen the development of multibillion-dollar neural networks that push the very limits of compute and hardware. And even with all of that, we could still be decades or more away from self-driving cars or algorithms that can interpret human neuronal activity.
Money can’t buy a technological breakthrough (it doesn’t hurt, of course, but scientific miracles take more than funding). And, unfortunately for Tesla and Neuralink, many of the smartest, most talented AI researchers in the world know that making good on Musk’s enormous promises may be a losing endeavor.
Perhaps that’s why Musk has expanded his recruitment efforts beyond researchers with a background in AI and is now trying to lure any computer science talent he can find.
The good news is that absolutely no amount of sober evaluation can dampen the spirits of Musk’s indefatigable fans. Whether he can deliver the goods or not has no impact on the amount of worship he receives.
And that’s as likely to change as Tesla’s ability to produce a self-driving car or Neuralink’s ability to interpret neuron activity in human brains.
How immersive VR could help you ditch your sex therapist
A boom in new technologies is revolutionizing the field of mental health, both in understanding and in treating disorders like phobias, eating disorders, and psychosis. Among these innovations, virtual reality (VR) is a powerful tool that provides individuals with new learning experiences, increasing their psychological well-being.
Immersive VR creates interactive computer-generated worlds that expose users to sensory perceptions that mimic those experienced in the “real” world.
People have found new ways to satisfy their sexual and emotional needs through technology; examples include virtual or augmented reality, teledildonics (sex toys that can be controlled through the internet), and dating apps. However, research on the use of VR in sex therapy is still in its infancy.
Sexual aversion is the experience of fear, disgust, and avoidance when exposed to sexual cues and contexts. A Dutch study published in 2006 found that sexual aversion affects up to 30 percent of individuals at some point in their lives. And a recent Québec-based survey of 1,933 people conducted by our laboratory revealed that at least six percent of women and three percent of men have experienced sexual aversion in the last six months.
These data suggest that sexual aversion is as common as depression and anxiety disorders.
Exposure and sexual aversion
Difficulties in experiencing sexuality with pleasure, whether solo or partnered, are at the heart of sexual aversion. Recovering from such difficulties involves changing one’s thoughts, reactions, and behaviors in sexual and romantic situations by, for instance, gradually exposing oneself to anxiety-provoking sexual contexts.
Recent findings suggest that VR could bring about such changes in real-life situations, particularly in individuals with poor sexual functioning or with a history of sexual trauma. Our own findings, not yet published, show that VR can help with intimacy-related fears and anxiety.
Immersive and realistic computer-generated worlds in VR could lead to positive sexual health outcomes such as increased pleasure and sexual well-being by alleviating psychological distress in sexual contexts.
Treatment for sexual aversion involves controlled, progressive, and repeated exposure to anxiety-provoking sexual contexts. These exposures aim to gradually reduce fear and sexual avoidance, two common reactions to sexual cues in sexually aversive individuals.
With this objective in mind, VR offers an ideal and ethical medium for intervention, as simulations can be tailored to different levels of sexual explicitness and be repeatedly experienced, even for sexual contexts that would be impossible or unsafe to recreate in real life or in therapy settings.
For instance, situations commonly feared by individuals with sexual aversion, such as sexual assault, failure or rejection, or feeling trapped in a sexual encounter, do not actually happen in VR. VR would allow them not only to overcome fears but also to learn new sexual skills that would otherwise be difficult, if not impossible, to develop. Individuals in treatment could then apply these learnings to real-world intimate situations.
Further, although people’s minds and bodies behave as though the virtual environment in which they are immersed is real, individuals are more willing to face difficult situations in VR than in the real world because they are aware that the former is fictional, and therefore safer.
Treating sexual aversion
In December 2020, we collected data that allowed us to compare sexually aversive and non-aversive individuals. Participants were immersed in a virtual environment simulating a typical intimate interaction, which involved a fictional character engaging in sexual behaviors throughout six scenes. Throughout the scenes, participants were gradually exposed to the character’s flirting, nudity, masturbation, and orgasm. Our findings suggest VR could represent a promising avenue for treating sexual aversion.
Sexually aversive and avoidant individuals reported more disgust and anxiety than non-aversive participants in response to the simulation. And the more sexually explicit the scenes, the higher the participants’ levels of disgust and anxiety. These results suggest that the virtual environment adequately replicated real-life contexts that would typically induce sexual aversion.
Future/futuristic treatments
Treatment options for people with sexual aversion could include exposure to tailored and diverse sexual contexts — for example, rejection, intercourse, sexual communication, attempted assault — through VR. This could help to alleviate distress and support positive and rewarding erotic encounters in real-life settings.
Applications of VR in sex therapy will be profoundly shaped by advancements in artificial intelligence. One example is the use of erobots (artificial erotic agents) in virtual interactive environments to simulate realistic romantic and erotic encounters, which are often avoided by sexually aversive people. Virtual agents could also be used to develop sexual skills, explore sexual preferences, and get reacquainted with one’s body and sexuality.
As VR can be used outside the therapist’s office, it could be included in self-treatment programs for sexual difficulties. With high-quality and affordable VR equipment entering the consumer market, future therapeutic VR protocols in sex therapy could be used in the comfort and privacy of one’s own home, promoting autonomy and improving access to treatment.
Article by David Lafortune, Professor, Department of Sexology, Université du Québec à Montréal (UQAM); Éliane Dussault, PhD candidate in sexology, Université du Québec à Montréal (UQAM); and Valerie A. Lapointe, PhD student in psychology, Université du Québec à Montréal (UQAM)
This article is republished from The Conversation under a Creative Commons license. Read the original article.