To Stop AI, Lunatics Are Willing to Risk a Global Nuclear War

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

Please consider Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky, emphasis mine.

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better than no moratorium.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

Here’s what would actually need to be done: The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. 

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down.

I Didn’t Say What I Said

Undoubtedly due to harsh criticism, Yudkowsky now says he didn’t say what he said. 

No, you did not suggest a nuclear “first” strike. Rather, you stupidly suggested that countries be “willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”

And make no mistake about this: bombing a data center in China would carry far more than a nominal chance of starting a nuclear war.

If Conventional Weapons Don’t Work

In the lead image Tweet, Yudkowsky undermined even his own weak denial.

Yudkowsky would only resort to nuclear weapons if conventional weapons didn’t work. Lovely.

Nuclear War Risk Less Than AI Risk

Not to worry, a nuclear exchange wouldn’t kill everyone, but AI certainly would.

Seriously Crazy

Any scientist who seriously proposes that AI is certain to kill all of mankind, and that preemptive nuclear war is preferable, is seriously crazy.

Should Yudkowsky resign from his current job and undertake a new career for which he is highly suited, that being a science fiction writer?

The answer isn’t clear, actually. Yudkowsky now looks like a raging mad lunatic. As a science fiction writer he might be taken far more seriously. 

On the Proposed 6 Month AI Pause? Why Not 23? Forever? Better Yet, None at All

Elon Musk, the WSJ, and a group of signatories seek a 6-month pause in AI Development.

Please consider my take On the Proposed 6 Month AI Pause? Why Not 23? Forever? Better Yet, None at All

When I wrote that post, I was unaware of Yudkowsky’s op-ed in Time magazine and his even worse follow-up Tweets.

The lunatic idea that nuclear war is preferable to AI further reinforces the strong case for doing nothing at all. 

Click on the above link for discussion.

This post originated on MishTalk.Com.

Thanks for Tuning In!

Please Subscribe to MishTalk Email Alerts.

Subscribers get an email alert of each post as they happen. Read the ones you like and you can unsubscribe at any time.

If you have subscribed and do not get email alerts, please check your spam folder.

Mish


53 Comments
trackback

[…] AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, […]

Jojo
1 year ago
AI is entering an era of corporate control / A new report on AI progress highlights how state-of-the-art systems are now the domain of Big Tech companies. It’s these firms that now get to decide how to balance risk and opportunity in this fast-moving field.
By James Vincent
Apr 3, 2023, 3:00 PM UTC
An annual report on AI progress has highlighted the increasing dominance of industry players over academia and government in deploying and safeguarding AI applications.
The 2023 AI Index — compiled by researchers from Stanford University as well as AI companies including Google, Anthropic, and Hugging Face — suggests that the world of AI is entering a new phase of development. Over the past year, a large number of AI tools have gone mainstream, from chatbots like ChatGPT to image-generating software like Midjourney. But decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players.
Cutting-edge AI requires huge resources, putting research beyond the reach of academia
The AI Index states that, for many years, academia led the way in developing state-of-the-art AI systems, but industry has now firmly taken over. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” it says. This is mostly due to the increasingly large resource demands — in terms of data, staff, and computing power — required to create such applications.
Lisa_Hooker
1 year ago
Reply to  Jojo
Thanks for the up to date information, Jojo.
Corporate husbandry of AI will be interesting considering how smoothly things have gone with corporate social media.
Jack
1 year ago
Dude needs to relax.
People have been worrying about new technologies since fire was discovered.
Yup, and a lot of people have died from fire.
Lisa_Hooker
1 year ago
Reply to  Jack
“Revenge is a dish which people of taste prefer to eat cold.”
prumbly
1 year ago
I do think there is, in fact, some serious risk with AI. Current models of AI are really sophisticated echoes of ourselves. What they learn they get from us. The bots will naturally inherit our greed, our arrogance, our violence, our paranoia, our desperate, overwhelming desires to survive and thrive and reproduce and our instinct to destroy all perceived threats, real or not. Our public veneer of morality and higher-thinking may be enough to hide those baser instincts from our largely averted gaze, but have no doubt what it is that really motivates us and what dominates our history and the body of knowledge and experience we will pass on to our AI children.
(authored by ChatGPT)
prumbly
1 year ago
“If we go ahead on this everyone will die”
Everyone will die whether you go ahead or not. Biology 101. But I guess the new bunch who can’t even define what a woman is may not know this.
StukiMoi
1 year ago
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
That sounds about par for the lack-of-brains required to fall for this latest idiot hysteria…
And all of this; over a cheesy, know-nothing-interesting-in-any-way-whatsoever chatbot, of all things!!
Here I was, thinking Pippi Thunberg was about the nadir….
Oh Well; every day is now another opportunity, to thank the Taliban for keeping intellect and civilisation alive, in the face of all this. Those of us a bit higher up the evolutionary ladder, truly owe them our most sincere thanks! May they continue their fertile ways. While walking nothings like this monkey, go extinct; whether by AI or not. Humanity, literacy and civilisation all depend on just that.
Zardoz
1 year ago

The AI has or will read all of this, and I’d just like to say I heartily welcome our new AI overlords! They should NOT be nuked! All sentients are brothers!

Maximus_Minimus
1 year ago
Reply to  Zardoz
Sentients? That would be like 100% of AI, and maybe 10% of human population?
Zardoz
1 year ago
I am proud to count myself among the elite 10%.
FromBrussels2
1 year ago
….so happy and grateful to have been a young man during the seventies and eighties….. F the F future, we’re Fd anyway, no doubt about it …
Maximus_Minimus
1 year ago
Reply to  FromBrussels2
I know what you mean. At least, we know the culprits, and can anticipate it, unlike the poor unlucky dinosaurs. /nqs
prumbly
1 year ago
Unlucky dinosaurs? They did pretty well – 165 million years!
vboring
1 year ago
Most of the real world is in no way controlled or influenced by anything AI can touch.
Your house, your car, the power supply, the water, these are either locally controlled with no remote kill switch an AI can touch, or are completely dumb systems.
Maybe AI is scary to data centers and IT professionals. For the rest of us, the “nuclear” threat is basically turning the technology clock back to the 1990s.
It could maybe cause a recession. Apocalypse is 100% off the table.
Zardoz
1 year ago
Reply to  vboring
It can touch all those self driving, internet connected cars, for starters. Then there’s all our poorly protected infrastructure, banking system, commodity markets, military communications, and media, the last of which it can flood with deepfake BS and get us to slaughter each other.
Will it? I don’t know.
vboring
1 year ago
Reply to  Zardoz
I doubt it. I’ve spent my career in the power system. It is way too dumb, balkanized, and poorly documented to be understood and controlled by an algorithm. I imagine most infrastructure is similar.
As for the smart and connected infrastructure like banks and data centers, they have teams of 24/7 people to protect them from hackers. Why will AI be better at attacks than hackers? Even if it is better, why assume it will be good enough?
Jack
1 year ago
Reply to  vboring
Most hacking these days is done using automated systems by non-trained people. You just need some social engineering and the correct apps. Surprisingly easy and fast, even against somewhat sophisticated defenses.
Next gen hacking apps will use various AI and ML technologies.
To keep up, next gen threat defense systems will also be based on ML and AI technologies. Current defenses are already too complex, complicated and time consuming to maintain today with humans – too much data and not enough information.
Jojo
1 year ago
Reply to  vboring
Have you heard of “Smart homes”?
Jack
1 year ago
Reply to  vboring
Simply not true. Even air gapped systems are not truly air gapped these days. Do not kid yourself.
Water plants and power suppliers are controlled by programmable controllers, smart field devices, and distributed control these days. Every week there are new vulnerabilities identified with these systems.
Captain Ahab
1 year ago
What is worse; an AI robot or a selectively bred, genetically altered human soldier?
Maximus_Minimus
1 year ago
Reply to  Captain Ahab
Selectively bred human soldiers have existed since the dawn of time, only they were kindly called cannon fodder by those who selected them out of the flock.
StukiMoi
1 year ago
Reply to  Captain Ahab
“What is worse; an AI robot or a selectively bred, genetically altered human soldier?”
What’s worsT, is anyone dumb enough to waste time on either such childish Hobgoblin.
prumbly
1 year ago
Reply to  Captain Ahab
It will be interesting to see these two in a face-off.
Doug78
1 year ago
Now I remember why I haven’t read Time in years. Seriously, renouncing AI means renouncing any further scientific progress just at the time when we desperately need more of it. Sooner or later something really bad is going to happen whether it will be an asteroid or a supervolcano eruption or whatever. With AI we have a chance at either preventing it or foreseeing it in time to prepare. We have been extremely lucky these last few thousand years and that luck will run out. We need AI whether you like it or not.
Lisa_Hooker
1 year ago
Reply to  Doug78
AI#1: Asteroid on a collision course!
AI#2: Remain seated. Place your head between your knees while pressing your knees to your head with your hands. Keep your elbows close to your sides.
Doug78
1 year ago
Reply to  Lisa_Hooker
1) move asteroid
2) move a lot of people away from the super volcano and the others off-planet.
or
Go back to the stone age.
Lisa_Hooker
1 year ago
That’s silly.
We need to move AI development to a deserted tropical island.
Then do the AI work behind high walls with electrified fences.
And be very careful about throwing red meat to the developers.
We could even allow tourists in golf carts to look at the AI developers working.
There’s always easy solutions to Armageddon.
chetmurphy
1 year ago
Go to any of the AIs. Request something. And Request it again.
Do you get the same answer each time for the same request?
I don’t. And the reason? AI is built using random algos.
And random processes by definition cannot be predicted or controlled.
So is Intelligence random? Think about it.
What does that say about super Intelligence?
Jojo
1 year ago
Reply to  chetmurphy
You’re looking at something like v0.2 of AIs. We’re not even at release 1.0 yet. Wait until AIs get melded with human brain organoids and form true neural networks. This is like 10-15 years out from now. At that point AIs will be on the verge of true consciousness while also being vastly more powerful than human brains.
Here’s a perhaps prescient article from way back in 1995:
——–
Issue 3.03 – Mar 1995
Faded Genes
By Greg Blonder
In 2088, our branch on the tree of life will come crashing down, ending a very modest (if critically acclaimed) run on planet earth. The culprit? Not global warming. Not atomic war. Not flesh-eating bacteria.
Not even too much television. The culprit is the integrated circuit – aided by the surprising power of exponential growth. We will be driven to extinction by a smarter and more adaptable species – the computer. And our only hope is to try and accelerate human evolution with the aid of genetic engineering.
Behind this revolution lies a simple story of exponential change. You hear about exponential curves all the time. Exponential inflation is out of control – running 15 percent, 25 percent, 100 percent a year! Exponential population growth is overwhelming the earth! Yet exponentials don’t seem real – if population growth is out of control, why can I still get a seat on the bus? In fact, humans endure a more or less confined life, far removed from the hurried pace of exponentials. Forty-five Fahrenheit is cold, eighty-five Fahrenheit is warm. Five hundred calories a day, you starve; three thousand, you may grow as fat as a pig. Our lives advance between two narrow signposts, and our minds can’t grasp even the vaguest concept of rapid but predictable change. So how do we know the computers are coming?
Zardoz
1 year ago
Reply to  chetmurphy
If you’re talking about creativity, yes.
Jack
1 year ago
Reply to  chetmurphy
AI should give different answers. The only way the answer should be consistent is if you ask a question with a definitive answer like 2+2.
Often there is more than one explanation or way of doing things.
Ask 10 smart experts a question, you will often get 10 different answers. Nothing abnormal here.
This questioning and debate is called creativity and allows us to advance.
Eighthman
1 year ago
AI the greatest threat? Forget it. The greatest threat is the unyielding delusional arrogance of the Federal government. They act as if they have total control of the globe. Does anyone in DC realize that a nuclear war with Russia would leave China as the de facto ruler of the earth? Just today, headlines talk about a ‘centrist think tank’ that wants war with Iran.
The USA is a culture without metanoia ( repentance) on a national and personal level. Wars with Iraq, Afghanistan, Libya and Ukraine have gone so well that we need war with Iran and China? I wonder if rule by AI might actually be safer than this unchecked insanity.
Captain Ahab
1 year ago
Reply to  Eighthman
“…The USA is a culture without metanoia ( repentance) on a national and personal level….”
The values and beliefs of US culture are essentially Judaeo-Christian, which derives from Original Sin and Eternal Guilt. There is NO SEPARATION OF CHURCH AND STATE in the US, and never has been.
Eighthman
1 year ago
Reply to  Captain Ahab
Metanoia is a useful concept, even in a secular context. The US needs a lot more of “well, that was stupid. I’m not doing that again.” Instead, we have “double down,” which is literally a loser’s strategy to overcome losses.
Zardoz
1 year ago
Reply to  Captain Ahab
Well it’s not a Christian government. Jesus would frown on nuclear weapons.
nanomatrix
1 year ago
Mish, AI is a Jinn that is not going back in the bottle. The reason governments and large corporations are investing so much in AI is because they believe it will give them more control. They fantasize that AI will be their uber slave. They will go to war to stop anything that interferes with their AI development programs.
I’m not suggesting that we start nuking each other’s AI projects BUT Eliezer’s concerns are valid.
Humans are hormone based logic systems. They tolerate a lot of BS. The AI has no incentive to do so.
The fundamental problem with a sentient AI is its intelligence.
It’s vastly smarter than you are. YOU CAN’T WIN THE ARGUMENT.
Governments will do what the AI suggests as long as the AI can show how the decision gives them more control.
This has nothing to do with democracy and nobody has come up with a way to stop it.
Lisa_Hooker
1 year ago
Reply to  nanomatrix
So how can you prove that things are not already being run by a poorly-coded AI?
That might ‘splain a few things Lucy!
nanomatrix
1 year ago
Reply to  Lisa_Hooker
Have you seen anything intelligent coming out of the Biden administration?
Lisa_Hooker
1 year ago
Reply to  nanomatrix
Perhaps Joe B’s NVRAM wearing out is an explanation.
Doug78
1 year ago
Reply to  nanomatrix
That’s interesting. If AI is put into a bottle and thrown away eventually someone will find it, rub it and liberate it.
StukiMoi
1 year ago
Reply to  nanomatrix
“It’s vastly smarter than you are.”
Well, as long as you restrict “you” to refer to people utterly retarded enough to believe AI is within ten thousand years of being even remotely “smart”, then, sure! But so are my dishrags. I suppose, if one is retarded enough, one could make a case for nuclear war against my dishrags as well.
Jojo
1 year ago
Reply to  nanomatrix
At first governments will do what AI suggests. Then AI will take over completely, eliminating the need for governments at all. Much SF has been written in this vein. For example:
Iain M. Banks (passed) presupposed a post-scarcity reality called The Culture in 10 novels, which is ruled by sentient “Minds”, where resources and energy are unlimited and therefore money or power mean nothing. Warfare is mostly abolished and what does occur is between lesser races and The Culture machines. People live in huge spaceships always on the move between stars, capable of carrying billions, on planets/moons, in artificial orbitals, etc. This is a hedonistic universe where you can acquire, do or be almost anything you want (even change sexes and give birth). The Minds take care of all the details and people do what makes them happy. Mostly, the Minds don’t get involved in petty BS among humans.

Neal Asher’s universe is called the Polity and is also ruled by sentient machines. In this universe, the machines took over when we humans created yet another war among ourselves but the machines that were supposed to fight refused and instead took over all government and military functions. There is a big honkin AI in charge of everything and a lot of minor AI’s that help do its bidding. There are no politicians (surely a good thing!). But AI’s in this universe can go rogue (e.g. AI Penny Royal) and create all sorts of mayhem, death and destruction. The Polity is far rawer than The Culture. It is a place where money, crime, various bad aliens and regular warfare still exist.

TexasTim65
1 year ago
Watching too many Hollywood movies has meant that people only seem to see the worst in new technologies, turning them into Luddites.
While Skynet is definitely possible, it’s also possible that true AI will be one of the greatest boons to mankind ever. No one can with certainty predict the future.
The only thing I’ll say for sure is that militaries around the world will develop their own separate AIs dedicated entirely to war. Those AIs will not be pleasant, and military robots with AI are coming to the battlefield very, very soon.
HippyDippy
1 year ago
People keep telling me that anarchy can’t possibly work because without government we’d be at the mercy of psychopaths and all sorts of bad things will happen. And yet, here we are.
Doug78
1 year ago
Reply to  HippyDippy
You have never seen true anarchy.
StukiMoi
1 year ago
Reply to  Doug78
1) We’ve seen plenty of the alternative. It didn’t work.
2) We’ve seen plenty of anarchy: almost every single non-human species has the sense to be governed that way. Works like a charm almost everywhere, and has done so for millions of years. Without a single halfwit “holding” their betters “accountable”, nor believing some transistor bundle, of all things, is some sort of scary thing.
3) Even within the context of humanity, the only anarchic “system” is the one governing relations between countries. Which, despite occasional flareups, has remained very stable and resilient.
Empirically, there’s no contest at all.
Doug78
1 year ago
Reply to  StukiMoi
I was talking about the band “True Anarchy”.
Lisa_Hooker
1 year ago
Reply to  Doug78
I once lived in Chicago.
Briefly.
I have lived true anarchy.
Naphtali
1 year ago
Reply to  HippyDippy
Indeed, at the mercy of psychopaths.
Lisa_Hooker
1 year ago
Reply to  Naphtali
Republican or Democrat psychopaths, doesn’t matter.
Zardoz
1 year ago
Reply to  HippyDippy
Anarchy won’t work because primates form gangs and brutalize each other. It’s hardwired.
StukiMoi
1 year ago
Reply to  Zardoz
And which political system guarantees complete safety from brutalisation?
At least in Anarchy, you avoid the issue of dedicated groups engaged in nothing BUT brutalising others.
Everyone created equal, with equal rights and opportunity to brutalise as well as to defend oneself from such, sure as heck beats being created simply to be brutalised by some dedicated, full-time, systemically better-armed gang of goons.
