Beyond the Hype: My Honest Reflections on Gen AI


It's been more than a year since the AI hype exploded, and now that the dust has settled, we're seeing some clear signs of where AI actually stands.

We've seen chat platforms like ChatGPT and Claude, image and video generation services like DALL-E and Midjourney, GitHub Copilot, writing assistants, and other applications.

For a long time I didn't want to write about it, but in this article I want to give my honest opinion on the current state of Gen AI.

As a software engineer, I have lived through plenty of hype cycles over the last 10-20 years:

  • Java and Object-oriented programming (OOP)
  • Agile and Scrum
  • Blockchain
  • Crypto & NFTs
  • Low code/No code platforms
  • Microservices

Each of these promised something revolutionary, but fell short.

All of these left a mark in one way or another. Some are still used to solve today's problems, but most of the hype wasn't justified.

Current hype - Gen AI

From time to time I glance over financial reports, and I recently stumbled upon a Goldman Sachs AI report titled "Gen AI: Too Much Spend, Too Little Benefit?".

This report presents some of the most critical analyses of generative AI that I've read.

The report describes 3 key aspects of AI:

  1. Productivity gains: Likely more modest than expected
  2. Return on investment: Potentially far below current projections
  3. Energy requirements: So substantial that power companies may need to boost infrastructure spending by nearly 40% in the next 3 years, primarily to meet demand from major tech players

What I like about this report is the fact that it comes from an investment bank with a typically profit-driven approach.

Since Goldman doesn't care about some CEO's feelings, only about potential financial gain, its stance on AI is particularly noteworthy.

It suggests growing anxiety about generative AI's future, with a note: the longer it takes for AI to generate profit, the higher the expectations for its eventual returns become.

While reading this report, I found many thoughts that mirrored my own about AI, so I decided to write this article.

I will explain some of the key points from the report.

More? Better? Best In Class? Bestest?

The report features insights from MIT's esteemed economist Daron Acemoglu, who has consistently questioned expectations about AI's economic impact.

Acemoglu asserts that truly transformative changes are unlikely within the next decade. He points out that generative AI's potential to affect global productivity is limited, given that many human tasks require multifaceted skills and real-world interactions - areas where AI advancement remains challenging.

This perspective from a respected economist signals a growing realization in both academic and financial circles that AI's practical, near-term impact may be overestimated.

Acemoglu raises a critical point that cuts to the heart of AI progress claims: What does it truly mean to "double AI's capabilities"?

This question exposes a key weakness in the AI enthusiasts' argument.

They often assert that scaling up models with more data and computing power will inevitably lead to more capable AI. However, this leaves a crucial question unanswered:

How does this theoretical increase in capability translate to practical improvements in specific tasks, like enhancing customer service?

This critique highlights a significant flaw in the AI hype. The assumption that larger, more powerful language models will automatically solve complex real-world problems oversimplifies the challenges of applying AI to practical scenarios. It shows the gap between theoretical AI capabilities and their practical, task-specific applications.

The notion of "more" in AI capabilities remains ambiguous. While improvements in generative AI often translate to faster processing, the concept of "better" lacks clear definitions or metrics.

This ambiguity partly explains why leading language models like ChatGPT and Claude, despite their advancements, haven't transcended their primary content-generation function. Anthropic may declare Claude "best-in-class", but this designation primarily reflects incremental improvements in speed and accuracy.

While these enhancements are noteworthy, they fall short of the revolutionary leap often promised by AI enthusiasts. The gap between current achievements and the transformative potential of AI remains substantial.

Training

A looming crisis in AI development, often overlooked, is the escalating demand for training data.

This challenge has the potential to significantly slow down AI progress.

A recent study presented at the Computer Vision and Pattern Recognition (CVPR) conference highlights a critical issue: achieving linear improvements in model performance requires exponential increases in data volume.

Translated into plain human language: to improve at a linear rate, you need exponentially more training data.
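To make that concrete, here's a toy Python sketch (my own illustration, not taken from the report or the study) that assumes a logarithmic relationship between data volume and performance - under that assumption, every extra point of performance costs ten times more data:

```python
# Toy assumption (illustrative only): performance grows with the
# logarithm of the training-set size, i.e. perf = a * log10(data).
# Inverting it shows the data cost of each extra point of performance.

def data_needed(performance: float, a: float = 1.0) -> float:
    """Invert perf = a * log10(data) to get the required data volume."""
    return 10 ** (performance / a)

for perf in range(1, 6):
    print(f"performance {perf}: ~{data_needed(perf):,.0f} data units")

# performance 1: ~10 data units
# performance 2: ~100 data units
# performance 3: ~1,000 data units
# performance 4: ~10,000 data units
# performance 5: ~100,000 data units
```

The exact curve in the study is different, but the shape is the point: linear gains on one axis demand exponential growth on the other.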

This trend implies not just a data acquisition challenge, but also a substantial computational burden. The financial implications are huge.

Anthropic CEO Dario Amodei estimates that current AI models in development may require up to $1 billion for training. Looking ahead, he projects that within 3 years, we could see models with training costs reaching $10 to $100 billion - a figure larger than Croatia's GDP.

This exponential cost curve raises serious questions about the sustainability and accessibility of advanced AI development.

It suggests that future progress in AI may be constrained not by technological limitations, but by the sheer economic cost of training increasingly sophisticated models.

Room for optimism?

The report includes a contrasting view from Goldman Sachs' Joseph Briggs, who maintains an optimistic outlook on generative AI's economic impact.

However, his arguments raise eyebrows:

  1. Briggs suggests AI will boost the economy by replacing workers in some jobs, allowing them to find work in other fields.
  2. He predicts significant cost savings from "full automation of AI-exposed tasks," but this relies on the unproven assumption that AI will fully automate these tasks.
  3. Briggs often mixes AI and generative AI, using the terms interchangeably when they are distinct concepts.
  4. He interprets recent generative AI progress as a sign of emerging "superintelligence", which I think is very premature.

I've included this section of the report to address occasional comments from my colleagues and friends that I don't present both sides of AI. However, there's a reason I generally focus on one perspective.

The pro-AI hype arguments often rely on speculative assumptions. They tend to predict future developments without solid evidence, such as the idea that language models or image generators will somehow become "intelligent".

These predictions usually stem from misunderstanding how current AI works. People confuse the ability to generate text or images with true artificial intelligence or consciousness.

I aim to ground the discussion in what AI can do now, rather than what some hope or believe it might do in the future. This approach helps separate realistic expectations from overly optimistic speculation.

Current AI models, like GPT, lack the genuine reasoning and creative thinking abilities that human brains possess. They're essentially very advanced pattern recognition systems, not truly intelligent beings.

The problem is that AI models performing well on "Abstraction and Reasoning Corpus" (ARC) tests aren't demonstrating real intelligence.

Instead, they're just processing millions of examples of humans solving these tests. It's like someone memorizing answers to an IQ test and crushing it, rather than understanding the problems.
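As a caricature of that distinction (my own sketch, not how any real model works internally), imagine a "solver" that has simply memorized puzzle-answer pairs from its training data:

```python
# A deliberate caricature (my own, not an actual model): a "solver"
# that has memorized (puzzle -> answer) pairs seen during training.

memorized = {
    "2, 4, 6, ?": "8",
    "1, 1, 2, 3, 5, ?": "8",
}

def solve(puzzle: str) -> str:
    # No reasoning happens here: either the puzzle was seen in
    # training, or the solver has nothing useful to say.
    return memorized.get(puzzle, "no idea")

print(solve("2, 4, 6, ?"))   # "8" - looks intelligent
print(solve("3, 6, 9, ?"))   # "no idea" - same pattern, never seen
```

Real models generalize statistically instead of matching exact strings, but the objection stands: scoring well on problems that resemble the training data is not evidence of understanding.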

This approach to testing AI is even less meaningful than cramming for an IQ test.

It doesn't measure true intelligence or problem-solving ability. Instead, it just shows how well the AI can mimic human responses to specific problems it's seen before.

This highlights a crucial misunderstanding about current AI capabilities.

These systems are incredibly good at pattern matching and data processing, but they're not actually "thinking" in any meaningful sense.

AI in everyday jobs

Taking orders at a fast food restaurant might seem straightforward to humans, but it's quite challenging for AI.

This is because AI systems like language models don't truly understand words - they just predict patterns in text.
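At its core, a language model does something like the following (a heavily simplified sketch; the word probabilities here are invented for illustration, while a real model computes them from billions of learned parameters):

```python
import random

# Heavily simplified sketch of next-word prediction: given the text
# so far, pick the next word from a probability distribution. The
# numbers below are made up for illustration.

next_word_probs = {
    ("i", "would", "like"): {"a": 0.6, "to": 0.3, "the": 0.1},
    ("would", "like", "a"): {"burger": 0.5, "coffee": 0.3, "refund": 0.2},
}

def predict_next(context):
    probs = next_word_probs.get(context, {"<unknown>": 1.0})
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(predict_next(("i", "would", "like")))   # e.g. "a"
print(predict_next(("would", "like", "a")))   # e.g. "burger"
```

Nowhere in that loop is there a concept of a "burger" or a "customer" - only statistics over word sequences, which is why accents, background noise, and unusual phrasing throw these systems off.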

Wendy's recently tried an AI system called "FreshAI" for taking orders at some of its restaurants.

However, the results show how far AI still has to go:

  1. The AI needs human help for about 15% of orders. That's one in every seven orders where a person has to step in.
  2. Customers report repeating themselves multiple times for the AI to understand their order.
  3. The system interrupts customers if they don't speak quickly enough.

These issues highlight a key problem with current AI: while it can handle simple tasks in controlled environments, it struggles with the variability and complexity of real-world interactions.

Understanding context, accents, background noise, and the subtle details of human communication remains a significant challenge for AI systems.

This example shows that despite the hype, AI still has a long way to go before it can reliably replace humans in simple customer service roles.

McDonald's recently stopped using their AI ordering system in over 100 restaurants, probably because it made big mistakes. For example, one customer accidentally ordered hundreds of chicken nuggets!

Overall, these AI ordering systems haven't been very successful yet.

Also, even if AI could take orders perfectly, humans still need to cook the food at places like McDonald's or Wendy's. Despite all the excitement and money put into AI, it isn't replacing jobs the way people said it would.

I'm tired of hearing that robots will take over all our jobs soon. The truth is that, right now, AI isn't good at completely replacing jobs.

Instead, it's making certain work tasks easier and cheaper to do. This might hurt people just starting in creative jobs, who use those tasks to build up their work experience.

Many CEOs and bosses are pushing AI into roles held by real workers, but this doesn't mean AI is better. It just shows that these bosses don't value the skills of their workers or care about giving customers quality work.

Jobs like copywriting and making concept art are much more valuable than what AI can do.

But in today's economy, many CEOs don't understand or respect this type of work. So they're using AI to produce lots of content that looks and sounds the same.

It's gotten so bad that some companies are now hiring copywriters to make their AI-generated text sound more like a real person wrote it.

This shows how AI isn't replacing skilled workers - it's just creating new problems that humans have to fix.

When CEOs replace workers with AI, they're missing something important.

Hiring a real person isn't just about getting a finished product. It's also about sharing the responsibility and risk of creating that product. For example, when you hire a copywriter to create content, you're getting more than just their time and skills.

You're getting:

  1. Their ability to listen and understand what you want
  2. Their unique creative style
  3. Their willingness to work with you, making changes until you're happy
  4. Their years of experience, which helps them solve problems you might not have thought of

You're paying them to take on the main job of creating the content, so you don't have to worry about it.

These things come from the writer's real-life experiences, both in their work and outside. You can't teach an AI these skills just by feeding it lots of data.

This is why real human workers are still very valuable, even with AI around.

Power and costs

There's an interesting part of the report featuring Jim Covello, Goldman Sachs' expert on computer chips.

Jim has a strong opinion about the current excitement around AI. He thinks it's overblown and unrealistic:

  1. Building and running all the AI systems will cost about a trillion dollars in the next few years.
  2. He asks, "What problem worth a trillion dollars will AI solve?"
  3. He points out that using expensive AI technology to replace low-paying jobs doesn't make sense. It's the opposite of how technology has typically been used in the past 30 years - replacing something expensive with a cheaper variant.

Jim says the current AI trend might not be as valuable or revolutionary as many think. He also challenges some common ideas about AI.

Many people compare today's AI to the early internet. Covello disagrees: the internet replaced expensive solutions with cheaper ones, while AI is itself very expensive and isn't designed to solve the complex problems that would justify its cost.

Some say AI will get cheaper over time, like other tech. Covello calls this "rewriting history". He thinks the tech world is too confident that AI costs will drop significantly.

He explains that computer chips got smaller, faster, and cheaper (Moore's Law) because companies like AMD competed with Intel. This competition drove progress.

But with AI, one company (Nvidia) controls most of the market for the special computer chips AI needs. Without real competition, there's less push for AI to become cheaper or better. Nvidia follows the concept "during a gold rush, sell shovels", so this situation suits them.

So, Covello is pointing out that AI might not follow the same path of becoming cheaper and more useful as other technologies have in the past.

He says AI costs are extremely high.

Even if costs drop, they'd need to fall by a huge amount to be affordable. He compares this to the early Internet days when businesses used expensive servers. But he says AI costs are much worse than that. Plus, we'd need to upgrade the whole power grid to keep AI growing.

Covello says big tech companies feel forced to join the AI race because of all the hype. This means they'll keep spending lots of money on AI. But he doesn't think AI will bring in much new money for these companies.

Why?

He believes AI won't make workers smarter. It'll just help them find information faster. Also, since any company can use AI, no one can charge more for their products just because they use AI.

In simple terms: no one's making real money from AI because it doesn't help companies earn extra. Being more efficient is good, but it doesn't set a company apart from the others.

He adds that big tech companies like Google and Microsoft will make extra money from AI. But it won't be the huge profits they might be hoping for, especially considering how much they've spent on AI in the last two years.

The main problem is that AI is supposed to be smart and make people smarter. But right now, it's just helping people find information faster.

Still, AI isn't creating new jobs or new ways to do jobs. It's not clear how it will make more money in the future.

Covello makes a tough final point: The longer we wait without seeing big, useful AI applications, the harder it will be to keep believing in AI's potential. Big companies might keep supporting AI development for now, but this might change if the tech industry faces economic troubles.

He predicts that if we don't see important ways to use AI in the next year or two, investors might start losing interest in AI.

The report also mentions power problems:

  • Big tech companies like Microsoft, Amazon, and Google use much more power now than before. By 2030, they might use enough power to run several U.S. cities.
  • In Northern Virginia, where many tech companies have data centers, the power grid might need to double its capacity in the next 10 years.
  • Power companies haven't had to handle such a big increase in power use for about 20 years. This is a problem because building new power systems takes a long time and involves a lot of rules and paperwork.
  • The number of power projects waiting to connect to the power grid grew by 30% last year. These projects now have to wait 3 to 6 years to get connected.

Verdict

So far, AI hasn't done many of the big things people hoped for:

  1. It hasn't created any must-have applications (the way the smartphone did).
  2. It's not making money for companies.
  3. It's not creating new jobs or changing industries in a big way.

Some people still defend AI by saying OpenAI (a leading AI company) has a secret, amazing technology we don't know yet. They think this secret tech will prove all the doubters wrong.

But here's the thing, it probably doesn't exist.

Mira Murati, OpenAI's CTO, recently said that the AI models they're working on in their labs aren't much better than what's already publicly available.

So, my take on this is:

  • There's no magic trick.
  • There's no secret thing that Sam Altman (OpenAI's leader) will show us soon that will change everything.
  • There's no amazing tool that big tech companies like Microsoft or Google are about to release that will suddenly make all this AI hype worth it.

People are getting excited about AI, but it's crucial to understand its limits.

Today's AI, known as generative AI, won't turn into the super-smart machines we see in movies - fictional AIs like Samantha from the movie "Her". That would require consciousness, something no current technology can achieve.

Real intelligence requires perfect information processing, true understanding, and decision-making ability based on experience.

These are all separate skills that AI currently doesn't have.

Current AI only processes information during training. It doesn't actually "learn" or "understand" in a human way. Instead, it gives answers based on math and probability, not genuine comprehension.

While today's AI is useful, it's far from the promised conscious, super-smart helper. The gap between what AI can do and what's being advertised is huge, and people need to recognize this difference.

Generative AI isn't going to change the job market much.

Why?

Because it can't do most jobs, and it's not even that great at the few things it can do.

Sure, AI can help make some tasks faster and easier. But here's the catch: the technology behind AI is super expensive to run. This creates a big problem for AI companies like OpenAI.

At some point, these companies will probably have to do one of two things:

  1. Raise their prices a lot
  2. Or risk going out of business

This is because they're using a very costly technology.

So while AI might be a helpful tool for some tasks, it's not the job market revolution some people think it is. The high costs and limited AI abilities mean it's unlikely to drastically change how we work or what jobs are available.

People ask me how the AI hype bubble will burst. I think it'll happen in stages:

First, investors will get upset.

They'll likely punish a big company like Microsoft or Google for spending tons of money on AI without making much profit.

Then, we'll see a series of small problems.

For example, Figma, a design tool, had to stop its new AI feature because it copied Apple's weather app design. This happened because the AI was probably trained on data that included Apple's app.

The final blow might be a major AI company failing. It could be a company like Cognition AI, which raised $175 million to build an "AI software engineer" but had to fake a demo to show it working.

These companies are built on technology that's not making money. When one of them collapses, it could burst the whole AI bubble.

The key point is that many AI companies spend a lot of money on non-profitable tech. This can't last forever, and when it falls apart, it could bring down the whole AI hype.