Depends what you mean by AI…
…but I guess currently the main difference is that the dot-com boom basically was the internet. It was huge but we thought it was small.
The AI boom is small, but we think it’s huge.
We think it's going to replace, invade, and take over everything all at once. It's not. The models take work to constrain. The training data takes time to find and clean. The applications and use cases have to be married to good data sets and modeled to functional outcomes.
We thought the internet was just people with journals, blogs, and GeoCities pages… It turned out to be eBay, YouTube, Reddit, Instagram, TikTok, Facebook, Amazon.
Right now we think AI will be in everything, doing everyone's jobs… but it will probably be a bunch of smaller tools: translators, cancer finders, copy editors, face scanners, better security cameras, better search engines…
The applications are still uncertain, but seem kind of smaller than we first thought. Ubiquitous (probably) but not larger than life.
The dot com boom was larger than life.
But I think the biggest difference is that AI will most likely need the internet. The dot com thing WAS the internet.
“AI” is kind of subordinate, or seems smaller in some sense. It’s more like lots of small changes. They’ll each make life a little easier, but it will all feel separate rather than some giant face in the sky that speaks intelligence into society or controls the world.
This is spot on.
And on the apparently contradictory "AGI" development, I'll add that the Big Tech kingpins like Altman/Zuck/Musk are hyping that while doing the exact opposite. They say it, then turn around, cut fundamental research, and focus more on scaling plain transformers up, or even on making AI "products" like Zuck is. And the models they deploy are extremely conservative, architecturally.
Going on public statements from all three, I think their technical knowledge is pretty poor and they may be drinking some of their own kool-aid. They might believe scaling up transformers infinitely will somehow birth more generalist ‘in the sky’ intelligence, when in reality it’s fundamentally a tool like you describe, no matter how big it is.
They differ in one major way: the economy was straight booming in the late 90s, and not in just a “rich people passing more money around amongst themselves” way. That dot com boom did end with a bunch of startups going bust, but it was also part of the process of building the internet we have today. Lots of hardware, lots of cabling, lots of towers, lots of people employed in making, installing, configuring, maintaining. In the end, the dot com boom created something.
This “AI” thing is a lot more “pouring barrels of money into literal incinerators”.
The biggest difference is that the Dot Com bubble was strongly focused on tech companies going public and pumping small cap stock prices up.
The AI bubble, on the other hand, is almost entirely being built by private equity, with the largest players all privately held but with large-cap stock companies holding substantial stakes. Rather than a bunch of small companies seeing their stock pumped to many multiples of its debut price and then falling to zero, we have large-cap companies bumping up their value substantially, but not by major multiples, while the actual value of the biggest players in AI is entirely speculative and can't be invested in by retail investors.
This is all by design: the financiers of the AI boom are well aware that a public-stock rush into AI by retail investors would lead to massive speculation and an inevitable crash. Instead, with all the retail money going into large-cap stocks, they hope to capture that value and funnel it into long-term gains by making sure those big companies have some stake in the "winning" private companies. When the first big AI companies go bust, they will be consolidated into their investor groups and harvested for innovation to transfer over to the winners.
Overall this strategy seems sound for avoiding a major retail stock bust, but it isn't without its own risks. For example, if open-source AI ends up winning out and the biggest private players fall flat, they could become toxic assets and drag down the large-cap stocks, and thereby the indexes and index funds, in favor of leaner players. In the current landscape, that would mean Microsoft going down with OpenAI while Apple goes up; Apple is waiting on the sidelines with a huge cash war chest, ready to buy.
best answer so far, thanks
Allegedly Google, early in this craze:
“We have no moat, and neither does OpenAI”
https://semianalysis.com/2023/05/04/google-we-have-no-moat-and-neither/
Ultimately the dotcom fantasy kind of panned out. A few tech companies have massive control over society now, with what is essentially cloud/internet business. They have a moat.
…But with the AI bubble, I think folks are underestimating how fast and low the “race to the bottom” is.
As random examples:
- Look at something like Nemotron 4B, which makes a lot of the mundane "AI" data processing people assumed would need big, power-hungry data centers basically free: https://huggingface.co/jet-ai/Jet-Nemotron-4B
- Look at GLM 4.6. I can run it on my Ryzen/3090 desktop, for free, at 7 tokens/sec, and for the first time I feel like it's beating Claude and Gemini at some things: https://huggingface.co/Downtown-Case/GLM-4.6-128GB-RAM-IK-GGUF (rough sketch of what running something like that locally looks like below)
These are both literally from the past day.
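For a sense of how little ceremony that local setup actually takes, here's a minimal sketch using llama-cpp-python. The model path, quant, and settings are placeholders (and the GLM quant linked above appears to target the ik_llama.cpp fork specifically), but the idea is the same for any llama.cpp-compatible GGUF:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# Paths and parameters are illustrative, not a recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="glm-4.6-quant.gguf",  # any llama.cpp-compatible GGUF file
    n_ctx=8192,                        # context window
    n_gpu_layers=20,                   # offload what fits in VRAM; the rest stays in system RAM
)

out = llm(
    "Summarize the difference between the dot-com boom and the AI boom.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

That's the whole stack: one Python dependency, a quantized weights file, and whatever hardware you already own. No megacorp data center in the loop.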
And all this is accelerating. Alternate attention is catching on (see: Qwen 80B, DeepSeek's experimental models, IBM Granite, probably Gemini). Bitnet is already proven and probably next, and it reduces the cost of matrix multiplication by an order of magnitude or two.
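To illustrate why bitnet-style ternary weights cut matmul cost so sharply, here's a toy numpy sketch (nothing like a real kernel, just the arithmetic idea): once every weight is -1, 0, or +1 plus a single scale, the multiply-accumulate loop collapses into plain adds and subtracts.

```python
# Toy illustration of BitNet-style ternary weights: with weights in {-1, 0, +1},
# a matrix-vector product needs no multiplications, only additions/subtractions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)            # activations stay higher precision
W_fp = rng.standard_normal((8, 16))    # original full-precision weight matrix

# Crude post-hoc ternary quantization (real BitNet b1.58 trains with this constraint):
scale = np.mean(np.abs(W_fp))
W_t = np.clip(np.round(W_fp / scale), -1, 1).astype(np.int8)

# Multiplication-free matvec: add activations where the weight is +1, subtract where -1.
y = scale * np.array([x[row == 1].sum() - x[row == -1].sum() for row in W_t])

# Matches the quantized matmul exactly; the open question is accuracy loss
# from the ternary constraint, not whether the arithmetic works.
assert np.allclose(y, scale * (W_t @ x))
```

The claimed savings come from swapping hardware multipliers for adders (plus ~1.6-bit weight storage), which is exactly the kind of thing that makes "run it on your phone" plausible.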
In other words, AI as a “dumb tool” is rapidly approaching “so cheap, it’s basically free to run locally on your phone,” and you don’t need all these megacorp data centers for that. There’s no profit in it. It’s all fake planning, and the ML research crowd knows it. That’s much more extreme than the dotcom hype, where cloud hosting/dev cost is kind of a fundamental thing.
The AI craze seems to be much more disconnected from reality. There is absolutely no functional product there, no real use, nobody wants it, and there is no way to turn a profit. This one is also using exponentially more resources: electricity and freshwater. Deutsche Bank is warning that it's propping up the American economy. Feels like the dot-com bubble on steroids.
My daily use of said tool disagrees with your assertion.
“For essay writing task”
I’m not using it to write essays. I’ve been programming far beyond my normal capabilities.
Asking Lemmy about AI is like asking /the_donald about antifa. They have no idea; it's just years of circlejerking about this topic that have disconnected the community from reality. The answers you get here will be vastly different than in normal spaces.
There are ML hobbyists/tinkerers here. I've seen them on zip, db0, itjust.works.
But it all gets downvoted, and I think those folks (me included) tend to keep their heads down on “AI” topics.
You don’t see a lot of true AI bros here, but they honestly don’t know squat and wouldn’t give a good answer either.
so, what’s the truth?
I said it in too many words, but everything’s magnified. ‘AI’ is an extremely useful tool, yet the current overhype is even more insane, and has muddied everything. It’s like a caricature of the dotcom bubble, but real.
Lemmy users will tell you AI is useless while also saying it's a threat to multiple industries.
It's similar in that there's hype and speculation. What isn't similar is that the Dot Com era didn't have as much advancement or as many use cases as these companies do today.
But Lemmy had some weird astroturfing early on. A few users really made it their mission to post all the worst yellow-journalism articles they could find, and that became self-feeding as other users only saw crazy doom and gloom about how it's coming fer yer jobs or assaulting your women.