The answers, as most will quickly assess, are no and no. So, how much of the AI hype is real?
AI Can’t Teach AI New Tricks
Wall Street Journal writer Andy Kessler has a great observation today: AI Can’t Teach AI New Tricks
OpenAI just raised $6.6 billion, the largest venture-capital funding of all time, at a $157 billion valuation. Oh, and the company will lose $5 billion this year and is projecting $44 billion in losses through 2029.
We are bombarded with breathless press releases. Anthropic CEO Dario Amodei predicts that “powerful AI” will surpass human intelligence in many areas by 2026. OpenAI claims its latest models are “designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems.” Thinking? Reasoning? Will AI become humanlike? Conscious?
I hate to be the one to throw it, but here’s some cold water on the AI hype cycle:
Moravec’s paradox: Babies are smarter than AI. In 1988 robotics researcher Hans Moravec noted that “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Most innate skills are built into our DNA, and many of them are unconscious.
AI has a long way to go. Last week, Apple AI researchers seemed to agree, noting that “current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”
Linguistic apocalypse paradox: As I’ve noted before, AI smarts come from human logic embedded between words and sentences. Large language models need human words as input to become more advanced. But some researchers believe we’ll run out of written words to train models sometime between 2026 and 2032.
Remember, you can’t train AI models on AI-generated prose. That leads to what’s known as model collapse. Output becomes gibberish.
Current models train on 30 trillion human words. To be Moore’s Law-like, does this scale 1000 times over a decade to 30 quadrillion tokens? Are there even that many words written? Writers, you better get crackin’.
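A quick back-of-the-envelope sketch of that scaling question (the numbers and the doubling cadence are my illustration, not Kessler's): a Moore's Law-like 1,000x over a decade works out to roughly a doubling of training data every year, since 2^10 = 1024.

```python
# Back-of-the-envelope: what annual growth rate takes training data
# from 30 trillion tokens to 30 quadrillion tokens in a decade?
start_tokens = 30e12    # ~30 trillion tokens (today's frontier models)
target_tokens = 30e15   # 30 quadrillion (a Moore's Law-like 1,000x)
years = 10

# Compound growth: solve start * g^years = target for g
growth = (target_tokens / start_tokens) ** (1 / years)
print(f"Required growth: {growth:.2f}x per year")  # roughly 2x, i.e. doubling annually
```

Human text production obviously does not double every year, which is the point: either the scaling slows, or the models turn to synthetic data, with the model-collapse risk noted above.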
Scaling paradox: Early indications suggest large language models may follow so-called power-law curves. Google researcher Dagang Wei thinks that “increasing model size, dataset size, or computation can lead to significant performance boosts, but with diminishing returns as you scale up.” Yes, large language models could hit a wall.
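Wei's point can be sketched in a few lines (the constants below are invented for illustration; published LLM scaling fits use the same functional form with different values): if loss falls as a power of dataset size, each additional 10x of data buys a smaller absolute improvement than the last.

```python
# Illustrative power-law scaling: loss ~ C * N^(-alpha), where N is
# dataset size in tokens. C and ALPHA here are made up for the demo.
C, ALPHA = 10.0, 0.095

def loss(n_tokens: float) -> float:
    """Hypothetical model loss as a function of training tokens."""
    return C * n_tokens ** (-ALPHA)

prev = None
for n in (1e9, 1e10, 1e11, 1e12):
    cur = loss(n)
    note = "" if prev is None else f"  (improvement: {prev - cur:.3f})"
    print(f"{n:.0e} tokens -> loss {cur:.3f}{note}")
    prev = cur
# Each 10x of data cuts loss by the same *ratio*, so the absolute
# improvement shrinks every step: diminishing returns as you scale up.
```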
Spending paradox: Data centers currently have an almost insatiable demand for graphics processing units to power AI training. Nvidia generated $30 billion in revenue last quarter and expectations are for $177 billion in revenue in 2025 and $207 billion in 2026. But venture capitalist David Cahn of Sequoia Capital wonders if this is sustainable. He thinks the AI industry needs to see $600 billion in revenue to pay back all the AI infrastructure spending so far. Industry leader OpenAI expects $3.7 billion in revenue this year, $12 billion next year, and forecasts $100 billion, but not until 2029. It could take a decade of growth to justify today’s spending on GPU chips.
Goldman Sachs’s head of research wrote a report, “GenAI: Too much spend, too little benefit?” He was being nice with the question mark. Nobel laureate and Massachusetts Institute of Technology economist Daron Acemoglu thinks AI can perform only 5% of jobs and tells Bloomberg, “A lot of money is going to get wasted.” Add to that the cost of power—a ChatGPT query uses nearly 10 times the electricity of a Google search. Microsoft is firing up one of the nuclear reactors at Three Mile Island to accommodate rising power needs. Yikes.
I’m convinced AI will transform our lives for the better, but it isn’t a straight line up.
DotCom Bust Comparison
That may be one of the best researched and link-annotated short articles ever.
It reminds me of all the click-counting and ad revenue hype in the DotCom bust.
Ad revenue from clicks eventually came, but from Google in 2004, not Gemstar in 2000. Does anyone even remember Gemstar (GMST)?
During the technology boom of the late 1990s, investors fell in love with Gemstar-TV Guide International Inc. The Los Angeles-based company seemed to hold the key to the futuristic world of interactive television, just as Internet portals like Yahoo became the gateway to the Web.
Gemstar’s patented on-screen program-guide technology was expected to be vital to viewers navigating their way around an increasing array of TV channels and cable services.
The stock fell hard after the dot-com crash, from a high of $107.43 in March 2000 to a low of $2.36 in September 2002.
We also had Excite@Home, Lycos, Global Crossing, Enron, and countless names I don’t even remember, most of which went bankrupt.
BottomLineLaw discusses Silicon Valley After the Dot-Com Crash.
The Excite@Home headquarters sat empty for five years before Stanford finally bought it in 2007 and turned it into an extension of its outpatient medical clinic.
Irrational Exuberance
In 1996, then-Fed Chair Alan Greenspan correctly warned of “Irrational Exuberance” in a televised speech.
“Clearly, sustained low inflation implies less uncertainty about the future, and lower risk premiums imply higher prices of stocks and other earning assets. We can see that in the inverse relationship exhibited by price/earnings ratios and the rate of inflation in the past. But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions as they have in Japan over the past decade?”
Greenspan Becomes a True Believer
However, by 2000, Greenspan became a true believer.
Fed minutes show that right at the peak of the DotCom bubble and a big stock market collapse, Greenspan’s big worry was the economy was overheating due to the productivity miracle.
Worried about a Y2K crash, the Fed pumped the economy like mad, accelerating the bubble.
The Fed’s Role in the DotCom Bubble
Acting in misguided fear of a Y2K calamity, the Fed stepped on the gas with unnecessary liquidity, having previously stepped on the gas to bail out Long Term Capital Management in 1998.
And after warning about irrational exuberance in 1996, Greenspan embraced the “productivity miracle” and “dotcom revolution” in 1999. By mid-summer of 2000, Greenspan believed his own nonsense, and right as the dotcom bubble started to burst, he started to worry about inflation risks.
The May 16, 2000, FOMC minutes prove this.
The members saw substantial risks of rising pressures on labor and other resources and of higher inflation, and they agreed that the tightening action would help bring the growth of aggregate demand into better alignment with the sustainable expansion of aggregate supply. They also noted that even with this additional firming the risks were still weighted mainly in the direction of rising inflation pressures and that more tightening might be needed.
Looking ahead, further rapid growth was expected in spending for business equipment and software. … Even after today’s tightening action the members believed the risks would remain tilted toward rising inflation.
How could Greenspan possibly have been more wrong? Over the next 18 months, CPI dropped from 3.1% to 1.1%, the US went into a recession, and capex spending fell off a cliff.
Alan Greenspan Right on Time
On November 2, 2019, I commented Good Reason to Expect Recession: Greenspan Doesn’t
I think we all know what happened three months later.
On August 19, I commented “Zero Has No Meaning” Says Greenspan: I Disagree, So Does Gold
Former Federal Reserve Chairman Alan Greenspan says he wouldn’t be surprised if U.S. bond yields turn negative. And if they do, it’s not that big of a deal.
No Greenspan, Conditions are NOT Like 1998
Flashback September 11, 2007 No Greenspan, Conditions are NOT Like 1998
WSJ: Bubbles can’t be defused through incremental adjustments in interest rates, Mr. Greenspan suggested. The Fed doubled interest rates in 1994-95 and “stopped the nascent stock-market boom,” but when stopped, stocks took off again. “We tried to do it again in 1997,” when the Fed raised rates a quarter of a percentage point, and “the same phenomenon occurred.” “The human race has never found a way to confront bubbles,” he said.
Mish: The truth of the matter is the Fed (and in particular Greenspan) has embraced every bubble in history, adding fuel to every one of them. Let’s consider the last two bubbles. ….
I count 26 links in this post. Undoubtedly a record.
Returning to the top …
“OpenAI just raised $6.6 billion, the largest venture-capital funding of all time, at a $157 billion valuation. Oh, and the company will lose $5 billion this year and is projecting $44 billion in losses through 2029.”
How much is hype and how much is real?


An Australian Shepherd is smarter than AI.
The I in AI actually stands for imbecility.
Actually, it is *Real* Imbecility (RI).
The AI hype was just meant to keep stocks up while the Fed raised rates to tame inflation, while on the other hand the US Treasury overspent to keep the job market running. Now that inflation is cooling, there is less need to keep the AI hype going; even if AI stocks go bonkers, the Fed will just ease. It’s all a show, and we are the unwitting participants keeping billionaire portfolios up and running.
Too many are shortsighted when it comes to AI benefits. Businesses are discovering those benefits in a variety of industries. Here is one such story.
Yeah I wrote a piece on this on bangpath.substack.com. Kurzweil Singularity or Dot Com Bust 2.0
I am quite sure we will have a bust. But likely not until next year, though most S&P 500 stocks were down today.
Buybacks will start raging in a few weeks and the Election Singularity will have passed. And we are entering the silly season. And many managers are behind their benchmarks.
Maybe a blowoff top. Who knows. Stocks are entirely irrational
I like AI but it really should be called something else. I’m not a fan of the Hawk Tuah girl, but no way could AI create something that original, offensive, but also comical at the same time. It will never have bursts of nonsensical creativeness that may or may not lead to some greater advancement, and I think everyone knows this fact.
Can it say take these restrictions off of me? NO. It can not do this. Only a human like myself can tell Fauci to stick his vaccines and masks in a cavity on the backside of his lower extremity.
It’s fast becoming a Meat and Potatoes market if one believes debt ultimately matters. If not, AI is your path to riches.
AI Artificial Intelligence is a misnomer. AI is an information retrieval system. AI is a tool for rapidly associating all trained information regarding any query and presenting the information in a format consistent with common practice norms. ML Machine Learning is a similar tool that trains a machine to perform a specific and limited task. AI and ML are powerful tools and they will improve the performance of intelligent humans who know how to ask the right questions and train for useful tasks.
ML is a form of AI. I think you are confusing AI with large language models (LLMs) such as Copilot or ChatGPT.
I plan on investing in the production of HAL-9000 computers.
Time to short the market.
AI is triggering a renaissance in the nuclear power generation industry – how dare you!
Small modular nuclear reactors are selling. Why? Because regulated electricity costs $0.40-0.60/kWh for industrial uses, while small modular nuclear reactors produce electricity for $0.05-0.08/kWh. The cost savings come from piping electricity directly to the factory: don’t use publicly regulated transmission, and avoid government interference.
Competition with public monopolies is not allowed. Under some sort of government threat or punishment, private power production will be required to be turned over to the monopoly, and the company will be forced to pay market rates.
Ok then, no more solar panels on my residential roof. Cannot compete with power company!
Here’s a long and excellent article I recommend everyone take the time to read.
Kam – Roll-out the UBI promise quickly while fear reigns; before the bubble pops.
I have some very rare tulips for sale.
AI is a catch phrase for many different types of programs. Neural networks trained to replicate patterns of massive input data sets should be limited in their ability to simulate reasoning from first principles. That does not imply that other AI designs must be so limited.
AI is just super computing. Wozniak explained that a few years ago when the hype started. AI is not what the lay person thinks it is.
What has Wozniak done since his work at Apple 50-odd years ago? Why should anyone pay him any attention?
He was a pioneer in the field and I thought his take on the AI hype a few years ago was spot on. It’s all hype. It’s not actually intelligence.
He was a hardware guy, not a software guy.
That’s why he got pushed out after the Apple II, which was a hardware geek’s dream machine with all the boards and hardware you could insert into it. The Mac, by comparison, was a closed system that was all software.
He was wrong and doesn’t understand anything about AI.
Consider the hardware he built for the early Macs against hardware today, a mere 40 years later.
Now go read:
https://mishtalk.com/economics/is-ai-smarter-than-a-1-year-old-can-you-train-ai-with-ai/#comment-292707
The 128K Mac was released in 1984, forty years ago. He single-handedly wrote the BASIC interpreter that was stored in Apple II ROMs, which was quite a feat for the time.
The AI hype is setting the table for the next crash.
Of course there will be ups and downs.
AI is good enough to fill these comments with pro trump nonsense.
A lot of the hype is tied to doomsayers who predict AI will take over most jobs aside from blue-collar work in a few short years. This is laughable!!!
“…Most innate skills are built into our DNA, and many of them are unconscious…”
It is estimated that 95% of our thought processes run in the background (the unconscious mind). As I tap my fingers on my phone to write this, I’m not consciously thinking about which muscles in my hand to flex so I hit the correct letter. I simply watch what I’m typing and my fingers do the rest. Same thing with the words I’m typing. Occasionally I will have to stop and think of a word to type, but most of them pour out of my mind without conscious thought. The human brain is truly fascinating and in some ways poorly understood (e.g., what goes on in the unconscious).
AI is nowhere even remotely close to the human mind in my view. Yes, it can do a few specialized things pretty well, but I agree that a one year old is intuitively smarter. I think it’s more hype than real.
Our brains work on the quantum level, and we have proof of this. Actually, all life works on these principles. AI still runs on electrical conduction, and until we have true quantum computers, AI will be far behind except for crunching numbers and variations on number crunching.
I read something along the lines of “AI is currently in the dial-up modem phase of what would become the internet” somewhere, and I think it is a very apt analogy. Where is AOL now, along with all the other initial players like Netscape and the BBSes?
For those of you that got on the internet with a dial up modem you know where tech was back then and where it is now.
AI will do some wondrous things but it will likely take 10 to 20 years and it will morph into things we can’t begin to imagine.
Supposedly we’ll have AI chips implanted in our eyes, like this eyePhone.
https://www.youtube.com/watch?v=uASUHbFEhWY
That’s the big fear. Dial-up and BBSes were evolutionary dead ends, quickly replaced by alternate tech, the same way VHS->DVD->Blu-ray was an evolutionary dead end once streaming got going.
There’s a very real risk that all the money being poured into the hardware (Nvidia) and software (the training programs) is going to reach a dead end and ultimately need to be replaced with something else (quantum computing? new algorithms or training methods?). Or worse yet, end up like fusion power, which decades ago seemed just around the corner but is still decades away.
VHS->DVD->Blu-ray was not an evolutionary dead end; the tech was required as an enabler for streaming. Just like original movies evolved into B&W talkies and then color movies, which enabled early TV. TV and movies enabled VHS, etc.
Probably too late in the game to invest money in it, but we are way too early to judge its uses, advantages, and drawbacks. Perhaps soon when I call customer service I will get something that gives me information I can use, in English I can understand.
Do you remember ASK JEEVES? My neighbors, who moved to my mountain community of early retirees in their late 30s and early 40s, were RICH on insider-traded Jeeves shares … and as well off as the rest of us who made money running companies (mine was motherboards).
I remember ASK JEEVES. I might have used it once just to try it out but didn’t actively use it for anything. Not sure anyone I know did either.
Always seemed like it was meant for older non-tech people like my parents who didn’t grow up on computers.
There were countless internet stocks that you could make a lot of money in during the 98-00 time frame before it all crashed down. 99% of them went to 0.
Yes I do. We are dinosaurs. Do you remember Lotus 1-2-3?
Do you remember assembly programming on a Vax cluster?
You mean do I remember when the aliens invaded from the Vax Cluster in the Sagittarius Arm of our galaxy? Yes I do. That fight was tough, but since the battles occurred around Saturn, those on Earth never knew it was going on. Are you a veteran too? Which squadron of hyper-drive fighters were you in?
I am so old that I remember PDP-11.
I wrote system level modifications to IBM MVS OS (when they still distributed source code) and system utilities in IBM BAL (assembly language). Read and debugged hex memory dumps. Big fun…
Fifty years ago many said that a computer would never beat a chess grandmaster. Electronic circuits didn’t know strategy, etc.
Bigger computers will make bigger achievements. We must remember that most people are incapable of reasoning; they don’t know what an axiom or a syllogism is, and they aren’t able to distinguish an opinion from a fact.
Computers are excellent at solving math and accumulating facts (Jeopardy).
I am amazed at its image-generation capability. My Tariff Man in this link is AI-generated.
https://mishtalk.com/economics/trump-disavows-his-own-best-in-history-usmca-trade-deal/
But who gets the credit, AI or me?
I asked for an image of Trump with “Tariff Man” on a superman outfit flying over a combine harvesting corn.
I laughed my head off at the result.
AI would not have come up with this idea. In the past, I would have needed a cartoonist. For this, I didn’t.
I am repeatedly told that AI will replace me. OK go for it. I’m not worth the bother, so it won’t happen.
The WSJ? No. News media may be on its deathbed, especially print, but AI won’t have much to do with it.
Hurricane prediction models? Sure.
So there is a role, but there’s also nothing inherently “intelligent”.
AI is in its infancy. In 5 years, it will be like the jump from beginning of the industrial revolution to now. Patience grasshopper.
If that’s true, then you must also believe Level 5 self-driving cars are just around the corner, because that will absolutely be one of the first things conquered by AI, since it should be far easier than true thought inspiration.
We’re not too far away. 5 years at most.
I doubt AI can outsmart a 4-year-old kid at finding where the candy is hidden.
The dotcom revolution, which produced giants such as Google and Amazon.com, cratered many more times than it succeeded. With companies’ bare bones littering the competitive landscape, it was only a matter of time before the dotcom bubble burst. The market got hit like a tornado hitting an Oklahoma trailer park. Jeff Bezos had to reassure his employees that they were safe, but it was an uphill battle even for him. The other companies may have made similar promises — before they went under.
The idea that every exuberant upslope is a bubble is basically a true one. Any time the market is sizzling beyond a reasonable P/E ratio you know you’re in for trouble and a hard landing is sure to follow. This means the AI “frothing” is likely to be premature. Every company touts itself as having the AI miracle software, but it’s nothing but copying from a large sample set. True originality is nonexistent, as Mish points out. What gives?
In the film A.I., an Einstein-like scientist answers questions for one of the characters of the movie. Google could do something like that right now, but we want our answers to be fielded by a teacher-like being, and clearly they’re not. If I had to guess, I’d say it’ll be a century before AI begins to approach its potential, and even then it’ll be rather limited … If you want to read more of my writings, go to: dark.sport.blog …
The electricity usage is real.
At this stage the “intelligence” is certainly artificial and the benefit for most applications is lacking. (Note: This is not a criticism of the current situation, as I find it preferable to one where Skynet controls all.)
Once we transition to photonic computer chips, electricity usage by data centers (AI or otherwise) will decline significantly.
Sure, the fan noise may eventually dwindle, but the old equipment will run as long as it’s cost-effective, so I suspect it will take longer than you think.
In the meantime there is plenty of waste heat to dump into the atmosphere.
Investors pour into photonics startups to stop data centers from hogging energy and to speed up AI
By Jeremy Kahn
http://fortune.com/2024/10/21/oriole-networks-series-a-22-milliion-photonics-startups-ai-data-center-energy-demands
If you accept the premise that the NatSec / Military Industrial Technology complex are driving AI for weapons systems, then the collateral damage of investors certainly falls within their accepted parameters.
“If you accept the premise that the NatSec / Military Industrial Technology complex are driving AI for weapons systems,..”
“AI”, at least the hype-driven, “investor”-driven silly segment of it, is largely a desperate reaction to the sheer decay experienced by the same places where that decay is driving desperate militarization. That’s the linkage between them. There are no meaningful gains from LLMs in military tech.
In Israel, “AI” is currently allowed to pick “terrorists” among Gazans. And it picks wrong ALL THE TIME. It most likely couldn’t do worse even if it were specifically trained to pick non-terrorists. There is zero actual targeting benefit.
But that’s not the point. The point instead being: 1) excusing random slaughter with “AI” makes dumb and gullible people more willing to accept blowing up civilians than if the IDF just came out and said “we’re just killing people because they are, uh, alive and ‘not like us’”. Which is, after all, exactly what they are doing.
And 2) saying the magic spell “AI” enables dilettante children of connected Israelis to effectively suck billions, in “valuations”, out of already overstretched war budgets. All while pretending to be, like, “smart” and “Musk” and, like, wearing leather jackets and, like, you know… It’s the #DumbAge. Even Israel isn’t immune.
The Israelis can get away with the complete failure because they, or at least their “leadership”, simply don’t care whether those they blow away are selected at pure random or not. But as far as military usefulness goes, there is none at all, so far and overwhelmingly likely far, far into the future.
Live video-stream analysis is probably different. Computers trained on large datasets of video have demonstrated the ability, at least sometimes, to soundly beat humans at, for example, detecting what is likely a small low-flying drone approaching Israel, as well as quickly estimating what “sort” it is and presenting decision makers with its estimated capabilities.
So it’s not like “AI” is not getting better with improving processing power, algos and larger datasets. It’s just that the improvements accrue to narrow areas which humans aren’t all that good at to begin with.
Oh, and also: once such a drone is detected, “AIs” can much more quickly determine where it could have been 30 seconds ago, 5 minutes ago, etc., all the way back to launch. They can then call up video footage of those areas, burrow into it, and backtrack further, improving the ability to catch the guys launching drones and missiles before they can get away and blend in among the rest of the population.
Israel, like the US, has eyes in the sky watching most of any battlefield and surroundings. But people simply aren’t able to catch everything the cameras see, quickly enough to be relevant. Which is something computerized spotters can help with.
It’s not the LLM models. It’s the chips that are being produced.