AI diversity groups snub future funding from Google over staff treatment
Google’s AI ethics drama has taken another twist. Three groups working to promote diversity in AI say they will no longer accept funding from the search giant after a series of controversial firings at the company.
Queer in AI, Black in AI, and Widening NLP cited the dismissals of Timnit Gebru and Margaret Mitchell, the former co-leads of Google’s Ethical AI team, as well as recruiter April Christina Curley, as reasons for the decision.
In a joint statement issued on Monday, the groups said Google’s actions had “inflicted tremendous harm” and “set a dangerous precedent for what type of research, advocacy, and retaliation is permissible in our community.”
Gebru was sacked in December after a conflict over a research paper she co-authored about the dangers of large language models, which are crucial components of Google’s search products.
Mitchell was fired three months later for reportedly using automated scripts to find emails showing mistreatment of Gebru, while Curley says she was terminated because the company was “tired of hearing me call them out on their racist bullshit.”
The three groups said Gebru and Mitchell’s exits had disrupted their lives and work, and also stymied the efforts of their former team. Curley’s departure, meanwhile, was described as “a step backward in recruiting and creating inclusive workplaces for Black engineers in an industry where BIPOC are marginalized and undermined.”
The groups urged Google to make the changes necessary to promote research integrity and transparency, as well as allow research that is critical of the company’s products.
They also called for the tech giant “to emphasize work that uplifts and hires diverse voices, honors ethical principles, and respects Indigenous and minority communities’ data and sovereignty.”
None of the organizations have previously rejected funding from a corporate sponsor. Wired reports that Queer in AI received $20,000 from Google in the past year, while Widening NLP got $15,000.
The trio joins a growing number of individuals and organizations who have spurned funding from Google over the company’s treatment of staff.
Five months after Gebru’s firing, the fallout continues to harm Google’s reputation for AI research.
Nvidia teaches AI to create new version of Pac-Man just by watching gameplay
Forty years after Pac-Man had his first taste of ghosts, an AI has given the ravenous puck a new maze to chew his way through — without studying the game code.
That means it recreated the game without first being taught the rules. Instead, the AI mimics a computer game engine to create a new version of the arcade classic.
Researchers at GPU giant Nvidia recreated the game by feeding a model they call GameGAN 50,000 Pac-Man episodes. After digesting the sample, it spat out a fully-functioning new version of the game.
The system uses generative adversarial networks (GANs), which Facebook Chief AI Scientist Yann LeCun called “the most interesting idea in the last 10 years in ML.”
GANs create new content — such as photos of faces that don’t exist — by pitting two neural networks against each other: a generator that creates samples and a discriminator that examines the designs to work out whether they’re real.
GAN inventor Ian Goodfellow, a machine learning director at Apple, compares the process to a competition between counterfeiters and cops.
“Counterfeiters want to make fake money and have it look real, and police want to look at any particular bill and determine if it’s fake,” he said.
The two networks learn from each other to create new fakes that could pass as the real thing — like a convincing Picasso forgery. Or in this case, a brand new version of a game.
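The adversarial setup described above can be sketched in miniature. This is a toy illustration, not Nvidia’s or Goodfellow’s actual implementation: a one-parameter linear generator tries to imitate samples from a Gaussian “real” distribution, while a logistic discriminator tries to tell the two apart. The distributions, learning rate, and step count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: a Gaussian centered at 3.
def real_samples(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator: a linear map from noise z to a sample, g(z) = gw*z + gb.
gw, gb = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(da*x + dc).
da, dc = 0.1, 0.0

lr = 0.01
for step in range(2000):
    z = rng.normal(size=32)
    fake = gw * z + gb
    real = real_samples(32)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    # (gradients of the binary cross-entropy loss, derived by hand).
    pr = sigmoid(da * real + dc)
    pf = sigmoid(da * fake + dc)
    grad_a = np.mean((pr - 1) * real) + np.mean(pf * fake)
    grad_c = np.mean(pr - 1) + np.mean(pf)
    da -= lr * grad_a
    dc -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    fake = gw * z + gb
    pf = sigmoid(da * fake + dc)
    grad_w = np.mean((pf - 1) * da * z)
    grad_b = np.mean((pf - 1) * da)
    gw -= lr * grad_w
    gb -= lr * grad_b

# The generator's output mean (gb, since the noise has zero mean) should
# have drifted toward the real distribution's mean of 3.
print(f"generator mean ≈ {gb:.2f} (target 3.0)")
```

Even at this tiny scale, the dynamic is the one LeCun and Goodfellow describe: as the discriminator gets better at flagging fakes, the generator is forced to produce samples that look more like the real thing.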
Nvidia says that it’s the first time that GANs have mimicked a computer game engine.
“We wanted to see whether the AI could learn the rules of an environment just by looking at the screenplay of an agent moving through the game,” said Seung-Wook Kim, the lead author on the project. “And it did.”
From Pac-Man to self-driving cars
AI has a long history of playing video games, such as the DeepMind system that mastered StarCraft II. But Nvidia’s research shows that it can also recreate them.
The company’s system learned to visually imitate Pac-Man by ingesting screen images and keyboard actions from past gameplay. When the artificial agent presses a key, GameGAN generates new frames of the game environment in real time until it has created a new layout.
The model can generate both the static maze shape and the moving elements: Pac-Man and his hated ghosts. It also learns all of the game’s rules, from sticking within the maze’s walls to avoiding ghosts until Pac-Man munches a Power Pellet.
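The loop the article describes, where a model consumes the current frame plus a key press and emits the next frame, can be sketched with a toy stand-in. This is not GameGAN itself: the hand-written `step` function below plays the role of the learned generator, moving a single “agent” pixel around a tiny grid, just to show the shape of an action-conditioned rollout.

```python
import numpy as np

FRAME = (8, 8)  # tiny stand-in for a game screen
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(frame, action):
    """Hypothetical stand-in for a learned frame generator: given the
    current frame and a key press, emit the next frame (here, by shifting
    the single agent pixel, clamped to the screen edges)."""
    r, c = map(int, np.argwhere(frame)[0])
    dr, dc = ACTIONS[action]
    nr = min(max(r + dr, 0), FRAME[0] - 1)
    nc = min(max(c + dc, 0), FRAME[1] - 1)
    nxt = np.zeros(FRAME)
    nxt[nr, nc] = 1.0
    return nxt

# Roll out a short "game session" one action at a time.
frame = np.zeros(FRAME)
frame[4, 4] = 1.0
for a in ["right", "right", "up"]:
    frame = step(frame, a)

r, c = map(int, np.argwhere(frame)[0])
print(r, c)  # prints "3 6"
```

In the real system, `step` is replaced by a trained GAN that has absorbed those 50,000 episodes, so the “rules” governing each next frame are learned rather than hand-coded.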
Nvidia says game developers could use the system to automatically generate new levels, characters, and even entirely new games. It can also separate the background from the characters so they can change the settings and the cast. If Pac-Man’s suffering from indigestion, maybe Mario could take his place and chase mushrooms around an outdoor labyrinth.
The system is currently limited to 2D games, but Nvidia plans to make it work on more complex games and simulators.
Ultimately, the company envisions using the system beyond the world of gaming. They expect that it could create simulated environments for developing warehouse robots or autonomous vehicles.
“We could eventually have an AI that can learn to mimic the rules of driving, the laws of physics, just by watching videos and seeing agents take actions in an environment,” said Sanja Fidler, the director of Nvidia’s Toronto research lab. “GameGAN is the first step toward that.”
Even if that never happens, GameGAN has at least given Pac-Man a memorable gift for his 40th birthday. And Nvidia has invited everyone to his party on a date TBC later this year, when the game will be made available on the company’s AI Playground.
Beverly Hills cops try to weaponize Instagram’s algorithms in failed attempt to thwart live streamers
We don’t know if Beverly Hills Police Department (BHPD) Sergeant Billy Fair practices Santeria (or owns a crystal ball), but we know he listens to the Sublime song of the same name. We know this because he’s become a viral sensation on Instagram after blasting the 1990s hit at a citizen in a misguided attempt to get Instagram’s algorithm to take down a live stream due to copyright infringement.
Ironically, the officer of the law isn’t very clear on the social media site’s rules. But we’ll get to that in a bit.
Background: This story comes to us from Vice’s Dexter Thomas via the Instagram account of one Sennett Devermont, the citizen on the receiving end of the cop’s silly attempt to game the system.
Credit: Instagram
In the post linked above, we can see video of Sergeant Fair employing the old “crank up the music so you can’t hear your parents yelling at you to clean your room” tactic as soon as Devermont starts asking too many questions.
Thomas’ reporting makes clear that this isn’t an isolated incident in Beverly Hills: both Fair and other officers have deployed this tactic. One officer is even quoted as not only knowing about Devermont’s IG account, but going so far as to mockingly bring up negative comments on posts involving the citizen’s interactions with BHPD.
Laws and rules: The law says that, for the most part, it’s cool for people in California to film law enforcement officers. And, despite what you might believe, Instagram’s rules say that, for the most part, it’s cool for you to post a video that has short moments of copyrighted music playing through it.
Quick take: What we have here is a case of officers of the law trying to weaponize the copyright-enforcement measures a private company is legally required to take, against a citizen exercising their First Amendment right to record the police.
And I’ll leave it to you to decide which is more important: protecting police officers from the consequences of their own actions, or upholding the rights detailed in the US’ founding documents.