Inside the world’s first AI-powered satellite — and its fight against clouds
On September 2 in French Guiana, an AI-powered satellite was launched into Earth’s orbit for the first time in history.
PhiSat-1 is now soaring at over 17,000 mph about 329 miles above us, monitoring polar ice and soil moisture through a hyperspectral-thermal camera, while also testing inter-satellite communication systems.
Onboard the small satellite is an AI system developed by Ubotica and powered by Intel’s Myriad 2 VPU — the same chip inside many smart cameras, Magic Leap’s AR goggles, and a $99 selfie drone. Its first task is filtering out images of clouds that impede the analysis.
Clouds cover around two-thirds of Earth’s surface at any given moment, which can severely disrupt the system’s analysis.
“Currently, in every satellite bar PhiSat-1, those images are all stored and then downlinked to the earth, because there’s no way of knowing the value of the data until it’s on the ground, and some human can look over them and say that’s valuable and that’s not,” Aubrey Dunne, chief technology officer of Ubotica, tells TNW.
“In the case of PhiSat-1, the AI is doing data reduction by saying there’s a lot of cloud in an image, there’s no point in storing it on the satellite and then downlinking it to the ground for someone on the ground to say that’s just cloudy, throw it away.”
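The decision logic Dunne describes — score each captured image for cloud cover onboard, and only store and downlink the ones worth a human's time — can be sketched in a few lines. This is an illustrative mock-up, not Ubotica's actual pipeline: the pixel-brightness heuristic stands in for the satellite's neural network, and the threshold value is an assumption.

```python
# Illustrative sketch of onboard cloud filtering (not Ubotica's actual code).
# A hypothetical cloud-cover estimator stands in for the neural network
# running on the Myriad 2 VPU.

CLOUD_THRESHOLD = 0.7  # discard images more than 70% cloud (assumed value)

def estimate_cloud_fraction(image):
    """Stand-in for the onboard model's cloud-cover estimate.
    Here: the fraction of pixels brighter than a crude cutoff."""
    bright = sum(1 for px in image if px > 200)
    return bright / len(image)

def select_for_downlink(images):
    """Keep only images clear enough to be worth storing and downlinking."""
    return [img for img in images if estimate_cloud_fraction(img) <= CLOUD_THRESHOLD]

# Example: three fake 4-pixel "images" (grayscale values 0-255)
clear = [10, 40, 90, 120]        # no bright pixels
partly = [10, 220, 90, 230]      # half bright
overcast = [210, 250, 230, 240]  # fully bright

kept = select_for_downlink([clear, partly, overcast])
print(len(kept))  # the overcast image is dropped before it ever uses bandwidth
```

The point is where the decision happens: the discard occurs in orbit, before any storage or downlink, which is what produces the bandwidth savings described below.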
The PhiSat-1 team estimates that the onboard processing will cut bandwidth use by about 30% and save scientists on the ground a lot of time.
“This is a huge saving, and this really is just the low hanging fruit for AI,” says Gianluca Furano, data systems and onboard computing lead at the European Space Agency (ESA), which is leading the project. “It isn’t really even our primary objective.”
Other big impacts the system could have are reducing latency and increasing autonomy. AI could quickly detect fires in images and then alert the relevant authorities of the location and size of the blaze, or help Martian rovers avoid craters without first communicating with analysts back on Earth.
Furano says current Martian rovers could hypothetically cover a few hundred meters per day. But in reality, the need to relay imaging information to and from Earth keeps their range to just one or two meters per day.
“If we can get Martian rovers moving at 10 or 20 times the speed, the amount of science that can be done is enormous,” he says.
Next steps for AI in space
PhiSat-1 is currently in a commissioning phase in which all the system’s elements are being tested. But the team already has an early result: the first AI inference performed in orbit.
When the commissioning period ends, a year-long operational phase will begin, with the satellite targeting certain locations on the globe and capturing, processing, and downlinking the relevant image data.
In the future, the AI model onboard could be adapted to perform a range of other tasks. Dunne believes this could lay the foundations for a “satellite as a service” model in which a satellite is deployed with AI hardware onboard that’s rented out for different uses.
“Ultimately, we think it’ll probably get to that point,” he says. “It won’t be tomorrow, but PhiSat-1 will start to prove that this is possible.”
UK bans Huawei from 5G network, citing security risks triggered by US sanctions
Britain has announced it’s banning Huawei from the country’s 5G network, mere months after agreeing to keep the firm as a supplier.
Telecom companies will have to strip all Huawei gear from their networks by 2027, and will be barred from buying any more of its 5G tech from the start of next year.
Digital Secretary Oliver Dowden said the move would cost up to $2.5bn and delay the UK’s 5G rollout by two to three years.
“This has not been an easy decision, but it is the right one for the UK telecoms networks, for our national security and our economy, both now and indeed in the long run,” he said. “By the time of the next election, we will have implemented in law, an irreversible path for the complete removal of Huawei equipment from our 5G networks.”
The decision follows months of pressure from the Trump administration. The White House claims Huawei could use the 5G network to help the Chinese government conduct cyber espionage, and has imposed a series of restrictions on the Shenzhen-based firm. Huawei has consistently denied allegations of ties to the Chinese state, arguing that the dispute is about trade rather than security.
Prime Minister Boris Johnson had previously sought to placate both superpowers while preserving the UK’s 5G infrastructure. In January, he agreed to keep Huawei in the system, but barred it from the sensitive “core” of the networks. Tuesday’s U-turn comes after the National Cyber Security Centre said it could no longer guarantee the security of Huawei gear, due to the US banning the firm from using American microchips.
“Following US sanctions against Huawei and updated technical advice from our cyber experts, the government has decided it necessary to ban Huawei from our 5G networks,” said Dowden.
The move will certainly please the White House, but it will also raise the ire of the Chinese government.
Did Elon Musk forget about OpenAI or is he just trolling his dumbest fans?
It’s impossible to tell if Elon Musk is serious about anything anymore. His image exists in a dichotomy between cartoonesque and genius. He’s simultaneously the richest man in the world and the patron saint of meme war veterans and shitcoin shillers. And he’s also a brilliant engineer and one of the most talented technologists in generations. All of this combines to make him, potentially, the world’s greatest social media troll.
His brand is excruciatingly simple, yet perfectly executed: no matter what he says, you’re always left wondering if there’s an “lol j/k” coming. There almost never is, especially when there should be.
Take this tweet for example:
Artificial general intelligence (AGI) is the holy grail of computer science. Essentially, an AGI would be capable of anything a human is, given the same access.
But it can be incredibly difficult to discern the difference between “human-level” and a really good parlor trick.
Here’s an easy way to tell: it’s all parlor tricks.
AI doesn’t happen by accident
When we think about OpenAI’s GPT-3’s ability to write original code or create cogent text articles, for example, it starts to feel like AI is becoming incredibly human-like.
But the ability to generate text or auto-generate computer code is only impressive at scale, and even then it’s not that cool.
GPT-3 is imitating human-like behavior, but it’s not acting like a human. In essence, GPT-3 is no different than a parrot.
If you want a parrot to say “Polly wants a cracker” you have to repeat that phrase over and over and reward it with a cracker. If you want it to say “Polly wants a hand grenade” you’ll still have to give it a cracker. That’s because Polly doesn’t know what the words mean, it just wants a cracker and is really good at imitating noises.
GPT-3 is like a trillion parrots, all trained to say a unique phrase, waiting for their trigger word all at once. Except parrots are individually smarter than GPT-3, which is in turn much faster.
To put this into perspective, let’s talk about Tesla‘s current AI capabilities and what it would actually mean for it or any other company to successfully solve and build an AGI.
Musk claims that Tesla’s “Dojo” could be the world’s most powerful supercomputer. It’s a bespoke system dedicated solely to training Tesla’s AI systems.
That probably sounds pretty amazing to Tesla fans. The world’s most powerful supercomputer was built just to train the world’s most advanced cars to drive themselves!
AI isn’t alchemy
But no amount of computer power can force a narrow AI to become a general one. Just like there’s no combination of math and wizardry that can legitimately turn lead into gold (sans a particle accelerator), there’s no combination of deep learning algorithms at scale that can suddenly brute-force an AI capable of human-level intelligence.
When you put the media hyperbole surrounding Dojo and Elon’s hype together, you get a bunch of people who are convinced that Tesla’s AI systems are becoming more intelligent at an exponential rate. They believe their cars work just like RPG characters. If Dojo gets enough experience it will level up and become even more powerful!
If this were true, then it’s only a matter of time before Tesla’s AI becomes the most powerful entity on the planet.
And if you believe that’s what’s happening at Tesla, you’re out of touch with reality. That’s not how machine learning works.
The only people who truly believe that Tesla is legitimately going to surpass Google, Apple, MIT, Meta, Nvidia, IBM, and Microsoft when it comes to AI are people who’re either financially invested in the company or emotionally invested in Musk.
Tesla is late to the party
Think about it: Boston Dynamics has been working on making useful robots for decades. Many of the people responsible for creating the foundational algorithms and neural networks that Tesla is using are currently working for Facebook, IBM, Google, Nvidia, and Apple.
The sheer number of companies working in the robotics, autonomy, and machine learning space is staggering. And Tesla is far from being a leader in the AI space. Where IBM and Google have thousands of AI products on the market that help businesses and governments do everything from automating transportation and shipping logistics to providing real-time battlefield analysis and command and control capabilities, Tesla and SpaceX have a handful of narrow systems.
And that’s what makes Musk’s claims that Tesla “could” contribute to AGI all the more ludicrous. He may as well be saying that a power surge during a thunderstorm could contribute to a military robot becoming sentient.
AGI literally stands for “artificial general intelligence.” In the computer science world “general” means the opposite of “narrow.”
A “narrow” intelligence, such as Tesla’s Full Self Driving, is only capable of performing a specific set of functions. A general intelligence would be capable of performing any feat a human can, given the same access.
We’re essentially talking about the difference between a robotic arm that tightens screws and C3PO from Star Wars.
Credit where it’s due
That’s not to say what Musk’s companies do with AI isn’t incredible. How can you not be stunned by Tesla’s ability to brute force level 2 autonomy using a vision-only approach?
And seriously, anyone who doesn’t get chills when they see a SpaceX rocket land itself probably doesn’t understand how hard it is to pull off such a complex feat.
But it’s ridiculous to claim that Tesla might accidentally back into AGI because its systems “train against the outside world.” Alexa trains against the outside world as well. Does Musk think it’s going to magically morph into an AGI by accident too?
And, laugh-out-loud, Elon Musk has already built an AI company solely dedicated to AGI. It’s OpenAI, the one that makes GPT-3! He co-founded it for crying out loud.
Did he forget about that?
Musk’s almost certainly trolling his fans. Especially the ones who appear to actually believe that Tesla is somehow the only company to ever come up with the idea of creating a domestic robot.
A tale of two Elons
Maybe I’m wrong. Maybe Elon’s genius will strike again and he’ll shock us all. But it’s more likely his grifter side is drumming up “ooohs” and “ahhs” to help make it look like the recent price hike to Full Self Driving is an investment in something bigger than it really is.
Unfortunately, if you want the Elon Musk who made PayPal a success, pushed Tesla to become the greatest car manufacturer in the world, and revolutionized space flight, you also have to take the one who’s his own shill.
Just remember, his predictions are traditionally more outlandish than Ray Kurzweil’s and Miss Cleo’s combined.
Here’s one from 2017:
Here’s one from 2020:
At the end of the day, you don’t have to take my word for it. If you think Musk’s right, feel free to order yourself a Tesla robotaxi, hop on your local high-speed autonomous hyperloop, and go buy a Tesla Optimus bot.