
Pausing AI Developments Isn’t Enough. We Need to Shut it All Down
Please consider Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky, emphasis mine.
An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better than no moratorium.
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
Here’s what would actually need to be done: The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries.
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down.
I Didn’t Say What I Said
Undoubtedly due to harsh criticism, Yudkowsky now says he didn’t say what he said.
No, you did not suggest a nuclear “first” strike. Rather, you stupidly suggested that countries be “willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”
And make no mistake: bombing a data center in China would carry far more than some nominal chance of starting a nuclear war.
If Conventional Weapons Don’t Work
In the Tweet shown in the lead image, Yudkowsky undermined even his own weak denial.
Yudkowsky would only resort to nuclear weapons if conventional weapons didn’t work. Lovely.
Nuclear War Risk Less Than AI Risk
Not to worry, a nuclear exchange wouldn't kill everyone, but it is certain that AI would.

Seriously Crazy
Any scientist who seriously proposes that AI is certain to kill all of mankind, and that preemptive nuclear war is preferable, is seriously crazy.
Should Yudkowsky resign from his current job and undertake a new career for which he is highly suited: science fiction writer?
The answer isn't clear, actually. Yudkowsky now looks like a raging mad lunatic. As a science fiction writer, he might be taken far more seriously.
On the Proposed 6 Month AI Pause? Why Not 23? Forever? Better Yet, None at All

Elon Musk, the WSJ, and a group of signatories seek a 6-month pause in AI Development.
Please consider my take, On the Proposed 6 Month AI Pause? Why Not 23? Forever? Better Yet, None at All.
When I wrote that post, I was unaware of Yudkowsky’s op-ed in Time magazine and his even worse follow-up Tweets.
The lunatic idea that nuclear war is preferable to AI further reinforces the strong case for doing nothing at all.
Click on the above link for discussion.
This post originated on MishTalk.Com.
Mish


The AI has or will read all of this, and I’d just like to say I heartily welcome our new AI overlords! They should NOT be nuked! All sentients are brothers!
Neal Asher’s universe is called the Polity and is also ruled by sentient machines. In this universe, the machines took over when we humans created yet another war among ourselves, but the machines that were supposed to fight refused and instead took over all government and military functions. There is a big honking AI in charge of everything and a lot of minor AIs that help do its bidding. There are no politicians (surely a good thing!). But AIs in this universe can go rogue (e.g., the AI Penny Royal) and create all sorts of mayhem, death, and destruction. The Polity is far rawer than The Culture. It is a place where money, crime, various bad aliens, and regular warfare still exist.