Exposing the Big Con: The False Promise of Artificial Intelligence

Author: Martin Hart-Landsberg

March 2, 2025

The leading big tech companies are working hard to sell artificial intelligence (AI) as the gateway to a future of plenty for all. And to this point they have been surprisingly successful in capturing investor money and government support, making their already wealthy owners even wealthier. However, that success doesn’t change the fact that their AI systems have already largely exhausted their potential.

More concerning, the uncritical and rapidly increasing adoption of these systems by schools, businesses, the media, and the military represents a serious threat to our collective well-being. We need to push back, and push back hard, against this big tech offensive.

The Big Con

According to tech leaders like Elon Musk, we are only years away from building sentient computers that can think, feel, and behave like humans. For example, as reported by Business Insider,

“Tesla CEO Elon Musk said in a [February 2025] interview with Dubai’s World Governments Summit that the economic returns of artificial intelligence investments will be seen in humanoid robots.

“Speaking to the UAE’s AI minister … Musk said that humanoid robots and deep intelligence will unlock the global economy’s potential by providing ‘quasi-infinite products and services.’ …

“‘You can produce any product, provide any service,’ Musk said of humanoid robots. ‘There’s really no limit to the economy at that point. You can make anything.’ …

“‘Will money even be meaningful? I don’t know; it might not be,’ he said, adding that robots could create a ‘universal high-income situation’ because anyone will have the ability to make as many goods and services as they want.”

Musk recently rebranded Tesla as an AI robotics company and, in a January earnings call, said that the company will soon be building thousands of Optimus robots, which will likely earn it “north of $10 trillion in revenue.”

And Tesla is not the only company pursuing this strategy. According to a Bloomberg article, “Apple and Meta are set to go toe-to-toe” in competing to build “AI-powered humanoid robots.” The article continues:

“It’s the stuff of science fiction – robots at home that can fold your laundry, bring you a glass of water, load up the dishwasher or even push the kids on the swing in the backyard. For years, that future seemed far off. But it’s getting closer, with help from some of the world’s largest technology companies.”

If the stock market is to be taken seriously, a lot of investors are true believers. The so-called Magnificent Seven stocks – Apple, Microsoft, Google parent Alphabet, Amazon.com, Nvidia, Meta Platforms, and Tesla – have been responsible for almost all the market’s gains over the past several years. At the beginning of 2023, the seven accounted for 20 percent of the S&P 500. A year later it was 28 percent. It is now 33 percent.

Getting Real

The 2022 release of ChatGPT by OpenAI marked the start of broad public engagement with AI. It was free, easy to access, and required no technical knowledge to use. And while it remains the most widely used chatbot, other companies, including xAI, Amazon, Meta, Google, and Microsoft, have launched competing products. However, although these chatbots can perform a variety of tasks, there is nothing “intelligent” about them. And despite heavy spending to boost their speed and computing power, they do not represent a meaningful step toward artificial general intelligence (AGI): systems with the ability to think, learn, and solve problems on their own.

Existing AI systems, like ChatGPT, rely on large-scale pattern recognition. They are trained on data, most of which has been scraped from the web (without permission or regard for copyright), and use sophisticated algorithms to organize that material in line with common patterns of use. When prompted with a question or a request for information, chatbots identify related material in their database and then assemble a set of words or images, based on probabilities, that “best” satisfies the inquiry. In other words, chatbots do not “think” or “reason.” Since competing companies draw on different data sets and use different algorithms, their chatbots may well offer different responses to the same prompt.
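To make that mechanism concrete, here is a deliberately tiny sketch: a toy model that counts which word follows which in its “training” text and then generates a continuation by sampling from those frequencies. Everything in it (the corpus, the names) is invented for illustration, and real chatbots use neural networks with billions of parameters, but the core move, predicting a statistically likely next word rather than reasoning about meaning, is the same.

```python
# A toy "language model": count which word follows which in a tiny
# training text, then generate a continuation by sampling from those
# observed frequencies. Illustration only; the corpus is invented.
import random
from collections import defaultdict, Counter

corpus = (
    "the economy will grow . the economy will slow . "
    "the robots will work . the robots will fail ."
).split()

# "Training": tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick a next word in proportion to how often it followed
    `word` in the training text (no meaning, just frequencies)."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# "Generation": start from a prompt word and repeatedly sample.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the robots will work . the economy"
```

Nothing in this procedure understands the sentences it emits; it only reproduces the patterns present in the text it was given.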

At the same time, all chatbots suffer from the same weaknesses. Their systems need extensive data, and scraping the web means that they cannot help but draw on material that is highly discriminatory and biased. As a result, chatbot responses can be compromised by the worst of the web. One example: AI-powered resume screening programs have been found to disproportionately select resumes tied to White-associated names. And because of their complexity, no one has yet been able to determine precisely how a chatbot organizes its data and selects its words. Thus, no one has yet devised a way to stop chatbots from periodically “hallucinating,” that is, seeing nonexistent patterns or relationships, which leads them to produce nonsensical responses.
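The resume-screening finding follows directly from this pattern-matching design. Below is a deliberately simplified sketch, with entirely fabricated groups and numbers, of how a system that merely learns statistical regularities from past decisions reproduces whatever discrimination those decisions contain. Real screening tools are far more complex, but they inherit bias through the same route.

```python
# Toy illustration: a system that "learns" from biased historical
# decisions reproduces the bias. Groups and numbers are fabricated.
from collections import Counter

# Hypothetical past screening records: every candidate equally
# qualified, but one group was selected far more often.
records = (
    [("group_a", "selected")] * 90 + [("group_a", "rejected")] * 10 +
    [("group_b", "selected")] * 40 + [("group_b", "rejected")] * 60
)

# "Training": extract the statistical pattern from the records.
selected = Counter(g for g, outcome in records if outcome == "selected")
total = Counter(g for g, _ in records)
score = {g: selected[g] / total[g] for g in total}

# "Screening": new, equally qualified candidates are ranked by the
# learned pattern, so the historical bias carries straight through.
print(score)  # {'group_a': 0.9, 'group_b': 0.4}
```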

The BBC recently tested the ability of the leading chatbots to summarize news stories and found that their answers contained significant inaccuracies and distortions. Here is what Deborah Turness, CEO of BBC News and Current Affairs, had to say:

“The team found ‘significant issues’ with just over half of the answers generated by the assistants. The AI assistants introduced clear factual errors into around a fifth of answers they said had come from BBC material.

“And where AI assistants included ‘quotations’ from BBC articles, more than one in ten had either been altered, or didn’t exist in the article.

“Part of the problem appears to be that AI assistants do not discern between facts and opinion in news coverage; do not make a distinction between current and archive material; and tend to inject opinions into their answers.

“The results they deliver can be a confused cocktail of all of these – a world away from the verified facts and clarity that we know consumers crave and deserve.”

This is certainly not a record that inspires confidence. For its part, the BBC recommended a “pull back” on AI news summaries.

No Light at the End of the Tunnel

Aware of these shortcomings, tech companies argue that they can be overcome by increasing the amount of training data and the number of parameters chatbots use to process information. That is why they are racing to build new systems with ever more expensive chips, housed in ever bigger data centers. However, recent studies suggest that this is not a winning strategy.

As Lexin Zhou, the co-author of a study published in the journal Nature, explains: “the newest LLMs [Large Language Models] might appear impressive and be able to solve some very sophisticated tasks, but they’re unreliable in various aspects.” Moreover, “the trend does not seem to show clear improvements, but the opposite.”

One reason for this outcome, says Zhou, is that the recent upgrades tend to reduce the likelihood that the new systems will acknowledge uncertainty or ignorance about a particular topic. In fact, it appears that the changes made were motivated by “the desire to make language models try to say something seemingly meaningful,” even when the models are in uncertain territory.

The resulting danger is obvious. In fact, according to Lucy Cheke, a professor of experimental psychology at the University of Cambridge, “Individuals are putting increasing trust in systems that mostly produce correct information, but mix in just enough plausible-but-wrong information to cause real problems. This becomes particularly problematic as people more and more rely on these systems to answer complex questions to which they would not be in a position to spot an incorrect answer.” Using these systems to provide mental health counseling or medical advice, teach our students, or control weapons systems is a disaster waiting to happen.

Some Perspective

Tech leaders confidently assert that AI will lead to revolutionary changes in our economy, boosting productivity and the well-being of the majority, and that if we want to reap the expected rewards we need to get out of their way. But what can we really expect from the massive AI-related investments projected for the coming years?

One way to ground our expectations is to consider the economic consequences of the late-1990s tech boom, which brought the growing popularity and mass use of computers, the internet, and email. At the time, this pivotal period was said to mark the beginning of the Information Age and a future of endless economic expansion. As for the economic payoff, the data on post-adoption trends in US labor productivity is not encouraging. As the International Monetary Fund reports,

“Labor productivity gains slowed from the range of 3–3.5 percent a year in the 1960s and 1970s to about 2 percent in the 1980s. In the late 1990s and early 2000s, the US economy experienced a sizable but temporary productivity boom as productivity growth rebounded to 3 percent. Since about 2003, productivity gains have been lackluster, with labor productivity slowing to an average growth rate of less than 1.5 percent in the decade after the Great Recession.”

Yes, these technologies and the many companies and products they spawned have changed how we work and live, but the economic consequences have been far from “revolutionary,” if by that we mean significantly improving the lives of most people. Worker earnings and economic growth have followed labor productivity on a similar downward trajectory. And given the limitations of AI systems, it is hard to imagine that their use will prove more effective at producing strong productivity gains and higher earnings for workers. Of course, that isn’t really the main point of the effort. Tech companies have made a lot of money over the years, and they stand to make a lot more if they succeed in getting their various AI systems widely adopted.

The Fightback

In exchange for their promised future of “quasi-infinite products and services,” tech companies are demanding that we help finance – through tax credits, zoning changes, and investment subsidies – the massive buildout of the energy- and water-hogging data centers they need to develop and run their AI systems. There is no win in this for us – in fact, Bloomberg News reports that Microsoft’s own research into AI use shows a disturbing trend: the more we trust AI, the less we think for ourselves.

“The researchers found a striking pattern: The more participants trusted AI for certain tasks, the less they practiced those skills themselves, such as writing, analysis and critical evaluations. As a result, they self-reported an atrophying of skills in those areas. Several respondents said they started to doubt their abilities to perform tasks such as verifying grammar in text or composing legal letters, which led them to automatically accept whatever generative AI gave them.”

And who will get blamed when the quality of work deteriorates or hallucinations cause serious mistakes? You can bet it won’t be the AI systems that cost billions of dollars.

So, what is to be done? At the risk of stating the obvious, we need to challenge the overblown claims of the leading tech companies and demand that the media stop treating their press releases as hard news. We need to resist the building of ever bigger data centers and the energy systems required to run them. We need to fight to restrict the use of AI systems in our social institutions, especially to guard against the destructive consequences of discriminatory algorithms. We need to organize in workplaces to ensure that workers have a voice in the design and use of any proposed AI system. And we must always ensure that humans have the ability to review and, when necessary, override AI decisions. •

This article was first published on the Reports From the Economic Front website.

Martin Hart-Landsberg is Professor Emeritus of Economics at Lewis and Clark College, Portland, Oregon. His writings on globalization and the political economy of East Asia have been translated into Hindi, Japanese, Korean, Mandarin, Spanish, Turkish, and Norwegian. He is the chair of Portland Rising, a committee of Portland Jobs with Justice, and the chair of the Oregon chapter of the National Writers Union. He maintains the blog Reports from the Economic Front.