[…] AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, […]
Jojo
1 year ago
AI is entering an era of corporate control / A new report on AI progress highlights how state-of-the-art systems are now the domain of Big Tech companies. It’s these firms that now get to decide how to balance risk and opportunity in this fast-moving field.
By James Vincent
Apr 3, 2023, 3:00 PM UTC
An annual report on AI progress has highlighted the increasing dominance of industry players over academia and government in deploying and safeguarding AI applications.
The 2023 AI Index — compiled by researchers from Stanford University as well as AI companies including Google, Anthropic, and Hugging Face — suggests that the world of AI is entering a new phase of development. Over the past year, a large number of AI tools have gone mainstream, from chatbots like ChatGPT to image-generating software like Midjourney. But decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players.
Cutting-edge AI requires huge resources, putting research beyond the reach of academia
The AI Index states that, for many years, academia led the way in developing state-of-the-art AI systems, but industry has now firmly taken over. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” it says. This is mostly due to the increasingly large resource demands — in terms of data, staff, and computing power — required to create such applications.
“Revenge is a dish which people of taste prefer to eat cold.”
prumbly
1 year ago
I do think there is, in fact, some serious risk with AI. Current models of AI are really sophisticated echoes of ourselves. What they learn they get from us. The bots will naturally inherit our greed, our arrogance, our violence, our paranoia, our desperate, overwhelming desires to survive and thrive and reproduce and our instinct to destroy all perceived threats, real or not. Our public veneer of morality and higher-thinking may be enough to hide those baser instincts from our largely averted gaze, but have no doubt what it is that really motivates us and what dominates our history and the body of knowledge and experience we will pass on to our AI children.
(authored by ChatGPT)
prumbly
1 year ago
“If we go ahead on this everyone will die”
Everyone will die whether you go ahead or not. Biology 101. But I guess the new bunch who can’t even define what a woman is may not know this.
StukiMoi
1 year ago
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
That sounds about par for the lack-of-brains required to fall for this latest idiot hysteria…
And all of this; over a cheesy, know-nothing-interesting-in-any-way-whatsoever chatbot, of all things!!
Here I was, thinking Pippi Thunberg was about the nadir….
Oh Well; every day is now another opportunity, to thank the Taliban for keeping intellect and civilisation alive, in the face of all this. Those of us a bit higher up the evolutionary ladder, truly owe them our most sincere thanks! May they continue their fertile ways. While walking nothings like this monkey, go extinct; whether by AI or not. Humanity, literacy and civilisation all depend on just that.
Zardoz
1 year ago
The AI has or will read all of this, and I’d just like to say I heartily welcome our new AI overlords! They should NOT be nuked! All sentients are brothers!
Unlucky dinosaurs? They did pretty well – 165 million years!
vboring
1 year ago
Most of the real world is in no way controlled or influenced by anything AI can touch.
Your house, your car, the power supply, the water: these are either locally controlled with no remote kill switch an AI can touch, or completely dumb systems.
Maybe AI is scary to data centers and IT professionals. For the rest of us, the “nuclear” threat is basically turning the technology clock back to the 1990s.
It could maybe cause a recession. Apocalypse is 100% off the table.
It can touch all those self-driving, internet-connected cars, for starters. Then there’s all our poorly protected infrastructure, banking system, commodity markets, military communications, and media, the last of which it can flood with deepfake BS and get us to slaughter each other.
I doubt it. I’ve spent my career in the power system. It is way too dumb, balkanized, and poorly documented to be understood and controlled by an algorithm. I imagine most infrastructure is similar.
As for the smart and connected infrastructure like banks and data centers, they have teams of 24/7 people to protect them from hackers. Why will AI be better at attacks than hackers? Even if it is better, why assume it will be good enough?
Most hacking these days is done using automated systems by untrained people. You just need some social engineering and the right apps. Surprisingly easy and fast, even against somewhat sophisticated defenses.
Next-gen hacking apps will use various AI and ML technologies.
To keep up, next-gen threat defense systems will also be based on ML and AI technologies. Current defenses are already too complex and time-consuming for humans to maintain: too much data and not enough information.
Simply not true. Even air gapped systems are not truly air gapped these days. Do not kid yourself.
Water plants and power suppliers are controlled by programmable controllers, smart field devices, and distributed control these days. Every week there are new vulnerabilities identified with these systems.
Captain Ahab
1 year ago
What is worse: an AI robot or a selectively bred, genetically altered human soldier?
Now I remember why I haven’t read Time in years. Seriously, renouncing AI means renouncing any further scientific progress just at the time when we desperately need more of it. Sooner or later something really bad is going to happen whether it will be an asteroid or a supervolcano eruption or whatever. With AI we have a chance at either preventing it or foreseeing it in time to prepare. We have been extremely lucky these last few thousand years and that luck will run out. We need AI whether you like it or not.
You’re looking at something like v0.2 of AI’s. We’re not even at release 1.0 yet. Wait until AI’s get melded with human brain organoids and form true neural networks. This is like 10-15 years out from now. At that point AI’s will then be on the verge of true consciousness while also being vastly more powerful than human brains.
Here’s a perhaps prescient article from way back in 1995:
——–
Issue 3.03 – Mar 1995
Faded Genes
By Greg Blonder
In 2088, our branch on the tree of life will come crashing down, ending a very modest (if critically acclaimed) run on planet earth. The culprit? Not global warming. Not atomic war. Not flesh-eating bacteria.
Not even too much television. The culprit is the integrated circuit – aided by the surprising power of exponential growth. We will be driven to extinction by a smarter and more adaptable species – the computer. And our only hope is to try and accelerate human evolution with the aid of genetic engineering.
Behind this revolution lies a simple story of exponential change. You hear about exponential curves all the time. Exponential inflation is out of control – running 15 percent, 25 percent, 100 percent a year! Exponential population growth is overwhelming the earth! Yet exponentials don’t seem real – if population growth is out of control, why can I still get a seat on the bus? In fact, humans endure a more or less confined life, far removed from the hurried pace of exponentials. Forty-five Fahrenheit is cold, eighty-five Fahrenheit is warm. Five hundred calories a day, you starve; three thousand, you may grow as fat as a pig. Our lives advance between two narrow signposts, and our minds can’t grasp even the vaguest concept of rapid but predictable change. So how do we know the computers are coming?
AI should give different answers. The only way the answer should be consistent is if you ask a question with a definitive answer like 2+2.
Often there is more than one explanation or way of doing things.
Ask 10 smart experts a question, you will often get 10 different answers. Nothing abnormal here.
This questioning and debate is called creativity and allows us to advance.
Eighthman
1 year ago
AI the greatest threat? Forget it. The greatest threat is the unyielding delusional arrogance of the Federal government. They act as if they have total control of the globe. Does anyone in DC realize that a nuclear war with Russia would leave China as the de facto ruler of the earth? Just today, headlines talk about a ‘centrist think tank’ that wants war with Iran.
The USA is a culture without metanoia (repentance) on a national and personal level. Wars with Iraq, Afghanistan, Libya and Ukraine have gone so well that we need war with Iran and China? I wonder if rule by AI might actually be safer than this unchecked insanity.
“…The USA is a culture without metanoia (repentance) on a national and personal level….”
The values and beliefs of US culture are essentially Judaeo-Christian, which derives from Original Sin and Eternal Guilt. There is NO SEPARATION OF CHURCH AND STATE in the US, and never has been.
Metanoia is a useful concept, even in a secular context. The US needs a lot more, ‘well, that was stupid. I’m not doing that again”. Instead, we have ‘double down’ which is literally a loser’s strategy to overcome losses.
Well it’s not a Christian government. Jesus would frown on nuclear weapons.
nanomatrix
1 year ago
Mish, AI is a Jinn that is not going back in the bottle. The reason governments and large corporations are investing so much in AI is because they believe it will give them more control. They fantasize that AI will be their uber slave. They will go to war to stop anything that interferes with their AI development programs.
I’m not suggesting that we start nuking each other’s AI projects BUT Eliezer’s concerns are valid.
Humans are hormone based logic systems. They tolerate a lot of BS. The AI has no incentive to do so.
The fundamental problem with a sentient AI is its intelligence.
It’s vastly smarter than you are. YOU CAN’T WIN THE ARGUMENT.
Governments will do what the AI suggests as long as the AI can show how the decision gives them more control.
This has nothing to do with democracy, and nobody has come up with a way to stop it.
Well, as long as you restrict “you” to refer to people utterly retarded enough to believe AI is within ten thousand years of being even remotely “smart”, then, sure! But so are my dishrags. I suppose, if one is retarded enough, one could make a case for nuclear war against my dishrags as well.
At first governments will do what AI suggests. Then AI will take over completely, eliminating the need for governments at all. Much SF has been written in this vein. For example:
Iain M. Banks (passed) presupposed a post-scarcity reality called The Culture in 10 novels, which is ruled by sentient “Minds”, where resources and energy are unlimited and therefore money and power mean nothing. Warfare is mostly abolished, and what does occur is between lesser races and The Culture machines. People live in huge spaceships always on the move between stars, capable of carrying billions, on planets/moons, in artificial orbitals, etc. This is a hedonistic universe where you can acquire, do or be almost anything you want (even change sexes and give birth). The Minds take care of all the details and people do what makes them happy. Mostly, the Minds don’t get involved in petty BS among humans.
Neal Asher’s universe is called the Polity and is also ruled by sentient machines. In this universe, the machines took over when we humans created yet another war among ourselves but the machines that were supposed to fight refused and instead took over all government and military functions. There is a big honkin AI in charge of everything and a lot of minor AI’s that help do its bidding. There are no politicians (surely a good thing!). But AI’s in this universe can go rogue (e.g. AI Penny Royal) and create all sorts of mayhem, death and destruction. The Polity is far rawer than The Culture. It is a place where money, crime, various bad aliens and regular warfare still exist.
TexasTim65
1 year ago
Watching too many Hollywood movies has meant that people only seem to see the worst in new technologies turning them into Luddites.
While Skynet is definitely possible, it’s also possible that true AI will be one of the greatest boons to mankind ever. No one can predict the future with certainty.
The only thing I’ll say for sure is that militaries around the world will develop their own separate AI dedicated entirely to war. Those AI’s will not be pleasant, and military robots with AI are coming to the battlefield very, very soon.
HippyDippy
1 year ago
People keep telling me that anarchy can’t possibly work because without government we’d be at the mercy of psychopaths and all sorts of bad things will happen. And yet, here we are.
1)We’ve seen plenty of the alternative. It didn’t work.
2) We’ve seen plenty of Anarchy: almost every single non-human species has the sense to be governed that way. Works like a charm almost everywhere. Has done so for millions of years. Without a single halfwit “holding” their betters “accountable”, nor believing some transistor bundle, of all things, is some sort of scary thing.
3) Even within the context of humanity; the only anarchic “system” is the one governing relations between countries. Which, despite occasional flareups, has remained very stable and resilient.
And which political system guarantees complete safety from brutalisation?
At least in Anarchy, you avoid the issue of dedicated groups engaged in nothing BUT brutalising others.
Everyone created equal, with equal rights and opportunity to brutalise as well as to defend oneself from such, sure as heck beats being created simply to be brutalised by some dedicated, full-time, systemically better-armed gang of goons.