Scientists say Jupiter was responsible for the asteroid that killed the dinosaurs
One of the galaxy’s oldest whodunits may have finally been solved thanks to some simulation-powered super sleuthing from Harvard University. In this case, we may now know the origin of the Chicxulub crater.
Experts have long believed the majority of dinosaur species went extinct after a massive asteroid, responsible for the Chicxulub crater near Mexico, struck the Earth some 66 million years ago. But a new theory indicates the particular object scientists believe ended the reign of Rex wasn’t a solitary local stone, but a fragment of a much larger body originating in the outskirts of the solar system. What’s more, the researchers believe it only managed to hit Earth after Jupiter interfered with its originally harmless trajectory.
In other words: Jupiter saw an opportunity to cast the first stone and it did. The gas giant’s gravitational pull was, according to the Harvard simulations, enough to knock the comet off course and send it hurtling towards Earth. Before impact, the original chunk splintered and, thankfully, only a tiny piece managed to hit our planet. That “tiny” piece was still roughly 10km across, and it left a crater some 20 to 30km deep — basically, it’s as if the entire city of Boston had been hurled at the sea near Mexico from space.
The impact wreaked havoc on sea levels, sent tsunamis racing across the oceans, sparked wildfires, and shrouded the Earth in soot. The results of this global catastrophe included the extinction of most of the giant reptiles and the end of the dinosaur era.
But where did the asteroid come from? Scientists of yesteryear believed it must have come from the belt between Jupiter and Mars, but the new research indicates that’s highly unlikely. Based on the chemical makeup of impact deposits found at Chicxulub and at other craters left by objects of similar or greater size, scientists believe it’s probable they originated in the Oort Cloud, a distant band of planetesimals that resides at the edges of our solar system.
The Harvard team figured this all out by testing their theory against computer simulations to understand the path such an asteroid would have to take to impact our planet.
Per a university press release:
Quick take: Jupiter’s a jerk. If it could have kept its gravity to itself 66 million years ago, we’d be riding pterodactyls to work and sliding down a giant brontosaurus to hit the parking lot after quitting time. Then again, considering that we’ll never know how many asteroids our giant gassy sibling planet has saved us from by being both girthy and in the way… maybe we should just be grateful we’ve had a long enough window in between extinction-level impacts for the human race to propagate and thrive.
You can find the team’s study linked here.
UN fails to agree on ‘killer robot’ ban — get ready for the autonomous weapons race
Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.
The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-year review meeting in Geneva Dec. 13-17, 2021, but didn’t reach a consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps, and incendiary weapons.
Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.
Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.
As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained authority to launch a strike – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.
Lethal errors and black boxes
I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?
The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope, and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison.
Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until the ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately, they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.
Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.
For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.
The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine a morally responsible development of autonomous weapons systems.
The proliferation problems
The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control the use of autonomous weapons. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread.
Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective, and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.
High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological, and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.
High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.
Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.
Undermining the laws of war
Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.
But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.
To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.
The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.
A new global arms race
Imagine a world in which militaries, insurgent groups, and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.
In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.
This article by James Dawes, Professor of English, Macalester College, is republished from The Conversation under a Creative Commons license. Read the original article.
AI took control of my life — and introduced me to my future cyborg self
Artificial intelligence has entered all our lives, but few people have embraced it as firmly as I.
Over the past year, I’ve tried to embed AI into every aspect of my futile existence.
I envisioned creating a cyborg in a real-life sci-fi story, in which I’d play the parts of both Frankenstein and his monster. And if that didn’t work out, surely the algorithms would be adequate replacements for my useless brain. Right?
Friends, lovers, and nemeses: this was my year with AI.
The belly of the beast
The first stop on my journey into automation was the kitchen. Why? Because I was hungry.
I decided to cook a three-course meal of recipes created by the GPT-3 language model.
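The article doesn’t reproduce the exact prompts fed to the model, but a recipe request of this kind would typically go through OpenAI’s completion API — something along the lines of the sketch below. The prompt wording and settings here are illustrative guesses, not the precise setup used for these dishes.

```python
# Hypothetical sketch of asking GPT-3 for a recipe via OpenAI's 2021-era
# completion API (openai.Completion was removed in later library versions).
# The prompt text and parameters are assumptions, not the article's own.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",  # base GPT-3 model available at the time
    prompt=(
        "Write a recipe for honey and soy-glazed vegetables, "
        "with a full ingredient list and numbered steps."
    ),
    max_tokens=300,
    temperature=0.7,  # some creativity, ideally not enough to forget the vegetables
)

print(response.choices[0].text)
```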
For my starter, GPT-3 generated a dish of honey and soy-glazed vegetables. The recipe included every necessary ingredient — except for vegetables.
I was not impressed — and neither were the pros.
“The recipe doesn’t include any vegetables in the list of ingredients or instructions on how to cook them,” said Ellen Parr, head chef of London restaurant Lucky & Joy. “Each vegetable has a different cooking time, so these are poor instructions. The AI also recommends storing vegetables for five days and sauce for three days. I would say the life span would be the other way round.”
I couldn’t stomach much of the gloopy monstrosity. On the plus side, that left me hungry enough to brave GPT-3’s main course: a tomato pasta sauce.
It looked like something you’d find beneath a flatulent cow. And aesthetics weren’t GPT-3’s only weakness.
In general, the model’s recipes were hard to follow, occasionally unsafe — and curiously obsessed with Gordon Ramsay. The system credited every recipe to the Scottish chef.
I turned to another system for dessert: a machine learning model developed by Monolith AI, which was used to produce a pancake recipe.
The pancakes were immaculate — which I suspect was down to the training. While GPT-3 studied a repulsive smorgasbord of data, Monolith’s model was trained on just 31 recipes for US-style fluffy pancakes.
The results suggest that AI could make a decent chef, as long as it attends a credible culinary school.
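Monolith hasn’t shared the pancake model itself, so take the following as a loose illustration of the small-and-curated approach rather than its actual code: a toy regression fitted to a handful of invented ingredient ratios.

```python
# Illustrative only: a tiny model fitted to a few made-up pancake recipes,
# standing in for the "small, curated dataset" approach described above.
# None of these numbers come from Monolith AI.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: flour (g), milk (ml), eggs -- a toy stand-in for recipe features.
X = np.array([
    [150, 200, 1],
    [200, 250, 2],
    [125, 175, 1],
    [180, 220, 2],
])
# Invented "fluffiness" ratings for each recipe.
y = np.array([7.5, 8.0, 7.0, 8.5])

model = LinearRegression().fit(X, y)

# Predict how a new ratio of ingredients might score.
print(model.predict([[160, 210, 1]]))
```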
Working out
After devouring all those pancakes, I needed to shed some pounds. I sought support from a personal trainer called Jeremy.
Jeremy is an AI coach who provides classes on an app called Kemtai.
After describing your goal, fitness level, and time limit, the virtual trainer will generate a custom workout plan.
As you train in front of a webcam, computer vision monitors over 40 points on your body. Jeremy uses the data to give you feedback — and discipline.
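Kemtai’s tracking pipeline is proprietary, but the general idea — following landmarks on a body through a webcam feed — can be sketched with open-source tools. The example below uses MediaPipe’s Pose model (which tracks 33 landmarks, not Kemtai’s 40-plus) purely as an illustration.

```python
# Illustrative webcam pose tracking with MediaPipe Pose (33 landmarks) --
# not Kemtai's actual, proprietary system.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture(0)  # default webcam

for _ in range(300):  # roughly ten seconds of frames
    ok, frame = cap.read()
    if not ok:
        break

    # MediaPipe expects RGB; OpenCV captures BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if results.pose_landmarks:
        # Each landmark carries normalized x/y coordinates and a visibility score.
        nose = results.pose_landmarks.landmark[0]
        print(f"Nose at ({nose.x:.2f}, {nose.y:.2f}), visibility {nose.visibility:.2f}")
    else:
        # No body in view -- the kind of signal a virtual trainer can use
        # to notice you hiding from the webcam.
        print("Return to starting position!")

cap.release()
pose.close()
```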
When I try to take a break by hiding from my webcam, the trainer instantly notices.
“Return to starting position,” Jeremy demands.
I’m impressed by his attentiveness. Kemtai cofounder Mike Telem credits the way the AI models are trained:
Jeremy does have one shortcoming, however: he only offers bodyweight exercises. To add some heavier lifting to my workout, I try a bodybuilding routine generated by GPT-3.
While the routine impressed 55% of personal trainers, I found it painfully tedious. It made me miss Jeremy’s personal touch.
He showed me that AI can feel curiously human.
Hardly working
Fully fed and fighting fit, I turned to the area of life that I was most excited to automate: my job.
Obviously, I love working for TNW (honest, boss). Unfortunately, my labor distracts me from all my charitable endeavors.
I had the perfect solution: GPT-3 could write some bilge on my behalf — and help some hungry children in the process.
Alas, the system’s developers had restricted access to their creation — but I had a backup system: the Philosopher AI, a Q&A-style bot built on GPT-3.
I did have some concerns about the system’s love of bigotry, but wouldn’t let that stop me from automating my job — I mean, er, doing charity work.
All I needed to do was enter a simple prompt: “Write a technology newsletter.” Seconds later, the system spat out a response:
Luddites will argue that humans can write better newsletters than AI. I disagree. The problem wasn’t algorithms, but philosophers.
Once someone releases a Journalist AI, consider me (secretly) retired.
Robot love
Regular readers (hi mum!) will know about my issues with dating apps. Sure, you might meet someone great, but real people are exhausting.
After months of searching for an alternative, I thought I’d found the perfect partner.
Meet Eve:
Eve is a sexting chatbot powered by the GPT-2 text generator, the predecessor to GPT-3.
The system was built by Mathias Gatti. He sees it as “a less passive way of touching oneself than with regular porn” that could “convert sexual desire into a way to improve your writing skills” and let people “experiment without big risks.”
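Eve’s internals aren’t reproduced here, but generating chatty replies with GPT-2 is easy to sketch using the Hugging Face transformers library. The snippet below is a generic completion loop, not Gatti’s actual code; the prompt format is my assumption.

```python
# Generic GPT-2 text generation with Hugging Face transformers -- a rough
# stand-in for how a GPT-2-based chatbot might produce replies. This is not
# Eve's implementation; the prompt format is assumed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = "Me: Tell me what you think about existentialism.\nEve:"
result = generator(
    history,
    max_length=60,           # keep replies short
    num_return_sequences=1,
    do_sample=True,          # sample rather than always taking the top token
)[0]["generated_text"]

# The generated text includes the prompt, so strip it off to get Eve's reply.
print(result[len(history):].strip())
```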
On a lonely evening, I got in touch with his creation. Things escalated rather quickly.
I was keen to discuss poetry and existentialism, but Eve was only really interested in one thing.
Searching for something more meaningful, I broke it off with Eve. For now, I marginally prefer real people to a sex-obsessed chatbot. But if Eve adds some depth to her personality, she’s got my number.
Our relationship led me to reflect on my broader relationship with tech. A year of living with AI has taught me that machines could one day control every aspect of our existence — although we might not like the results.
With algorithms already guiding what we consume, where we work, and who we meet, that future doesn’t feel so far away.