Challenging the Dominant Narratives About AI: Parts 1-2

❈ ❈ ❈

[In the wake of ChatGPT’s release in late 2022, artificial intelligence quickly became a global obsession—and a corporate gold rush. But behind the promises of productivity, convenience, and innovation lies a far more sobering reality: AI is accelerating energy consumption, fueling inequality, and embedding mass surveillance deeper into the foundations of society. This four-part series is adapted from the Bioneers 2025 panel AI and the Ecocidal Hubris of Silicon Valley, featuring leading voices who challenge the dominant narratives about AI and call for deeper scrutiny of its impacts. We are publishing the first two parts in this issue; the remaining two will appear in the next issue of Janata Weekly.]

● ● ●

Part 1: Progress at Any Cost? The False Promises of AI

Koohan Paik-Mander

[The author delivers a sweeping critique of AI as the latest frontier of late-stage capitalism. She draws a clear line from data centers to autocracy, and makes the case that the surveillance economy isn’t just dystopian. It’s already here.]

Artificial intelligence—AI—is attracting truly enormous amounts of investment these days. In the two years since the introduction of ChatGPT, hundreds of billions of dollars have been poured into AI, all chasing the kinds of returns that Silicon Valley has traditionally seen. That’s why AI is being pushed down our throats at every turn.

At climate conferences like COP, you’ll see corporate banners claiming AI will cure climate change. At biological diversity conferences, fossil fuel companies tout AI as the solution to species extinction. And politicians across party lines, from Trump to Biden, celebrate AI as the path to U.S. dominance. Techno-utopianism is a bipartisan fever dream.

The entire globalized economy is racing to saturate civilization with AI, no matter the cost. If it means building data centers from sea to shining sea, so be it. If those data centers use so much energy that they foreclose the possibility of ever reaching climate solutions, hey, we’ll just reopen Three Mile Island and use nuclear power. And if those nuclear power plants take decades to come online? Fossil fuels will suffice in the meantime, because you can’t stop the wheels of progress, right?

This is late-stage capitalism, and AI is the poster child. It’s capitalism eating itself and everything else.

Now, I’m not saying that AI doesn’t have good, useful applications. But we need to start examining the real costs of an AI-driven society. That’s why I’m excited to introduce three incredible thinkers who are helping us do just that: Paris Marx from Canada, Soledad Vogliano from Argentina, and, from the exotic land of Healdsburg, California, Claire Cummings.

Before we get to them, let’s take a moment to demystify AI. It’s not intelligent. It doesn’t think. It’s basically a very sophisticated classification machine that makes predictions based on large volumes of data. Building an AI system typically involves scraping the entire internet, or collecting as much genetic or biometric data as possible, and training the model to recognize patterns. What you get is a fancy machine that makes educated guesses.

And because they’re just guesses, they’re often wrong. The industry doesn’t call them mistakes; it calls them “hallucinations,” a term that conveniently anthropomorphizes the machine. And these errors are baked into the system—you can’t eliminate them. Worse, you often can’t even trace how the mistake occurred. That’s the “black box phenomenon”: millions of calculations happening at once, totally opaque, with no audit trail.

When you think about the fact that Elon Musk has used AI to determine which people and programs are getting cut from the federal budget, it’s infuriating.

The enormous power asymmetry created by the AI economy can’t be overstated. Over the past 30 years, digital technology has arguably been the most effective means of accelerating inequity and centralizing control since slavery. Think about it: Of the ten richest people in the world, eight are Silicon Valley tech magnates. This isn’t a coincidence. There’s something inherent in this technology that drives inequality.

AI is what’s known as a “force multiplier.” It amplifies this dynamic of inequity and locks it in. It does this by embedding itself in society’s infrastructure: massive data centers, yes, but also the vast surveillance web of the “Internet of Things”—smart appliances, connected cars, facial recognition cameras, biometric sensors. These aren’t conveniences. They’re surveillance tools. And surveillance, as we know, is a cornerstone of autocracy and fascism.

At the same time, investors are frothing at the mouth to pour billions of dollars into AI. A few years ago, The Economist ran a cover that said, “Data is the new oil.” If that’s true, then AI is the refinery, processing raw data into pure power for a small group of oligarchs.

The AI surveillance infrastructure entrenches a profound power asymmetry in our society. This is nothing to sneeze at. The corporate state knows everything about us, and we don’t know anything about it. These are the conditions for fascism. And the persecution has already begun.

This data surveillance infrastructure serves three main purposes. First, it continually trains AI by harvesting new data. It’s never done learning, and it needs constant input. Second, it builds detailed personal profiles for every one of us—profiles that can be used to control us. Third, those profiles are monetized. You become the product.

Let’s take a look at how that plays out. Say you miss a payment on your car insurance. Your insurance company can remotely deactivate your engine. Say you live in a smart home, and someone who remodeled your kitchen visits you regularly. If that person later commits a crime, it could be associated with your profile, potentially impacting your ability to get a job or a loan.

This isn’t science fiction. It’s the same proximity-based technology used by the Lavender AI system to determine kill-list targets. Tens of thousands of people have been assassinated using this system, simply for being near someone labeled a terrorist.

The poster child for all this? Cambridge Analytica. Remember them? The cyber-warfare firm worked with Steve Bannon in 2016, using AI tools to identify and target persuadable voters among 230 million Americans. That manipulation helped elect Trump and pass Brexit. Airbnb now uses similar methods to shut down local legislation aimed at regulating short-term rentals.

And still, people tell me, “This isn’t my problem—I’m not even on social media.” But it is everyone’s problem. If enough people are persuaded by this propaganda, it shapes policy that affects us all.

Sure, AI can be fun. You can make weird videos. But that doesn’t address the core issue: the staggering power imbalance created by embedding surveillance into the very fabric of our civilization just to prop up the AI economy. For me, that’s a deal breaker.

[Koohan Paik-Mander is a journalist and activist. She is a co-founder of the Tech Critics Network and board member of the Global Network Against Weapons and Nuclear Power in Space.]

❈ ❈ ❈

Part 2: The True Cost of AI: Water, Energy and a Warming Planet

Paris Marx

[In this second of a four-part series exploring the unchecked impacts of artificial intelligence, tech critic Paris Marx unpacks the environmental footprint of AI’s infrastructure and asks: Is this the future we really want?]

Let’s go back to November 2022. You probably heard about an app called ChatGPT, released on November 30th. Almost overnight, generative AI was everywhere. It became the dominant topic of conversation, central to headlines, social media, and everyday discussions. The media couldn’t stop speculating about what ChatGPT might mean or how it could reshape society. OpenAI’s CEO, Sam Altman, was tweeting about how fast it was growing, as if rapid adoption alone proved that a massive transformation was underway. And with everyone from tech outlets to your social feed buzzing about it, it felt almost obligatory to try it out just to see what the fuss was about.

That launch was accompanied by a sweeping narrative: this was going to change the world. Something bigger was emerging—something with the potential to be incredibly powerful, maybe even beneficial, but also deeply unsettling.

Proponents of generative AI framed it as a leap in collective human intelligence. They promised a wave of AI assistants, each specialized for different industries—an architecture bot, a science bot, and so on. These tools, they claimed, would revolutionize entire sectors and possibly replace human workers along the way. At the same time, they made sure to pitch a silver lining: AI would vastly expand access to education and healthcare. But let’s be honest: When they talked about people going to AI doctors, they didn’t mean themselves. That was clearly meant for everyone else.

There may well be some positive outcomes from this technology, but there’s also the looming possibility of serious harm. The narrative goes something like this: We must develop AI, even though it might destroy the world. It could lead to the end of humanity. This mix of hype sprinkled with warnings of existential risk doesn’t just shape public perception; it influences how the media talks about AI and how organizations begin to position themselves in response to it.

The tech industry benefits from these grand, speculative conversations. They want us focused on how powerful AI might become someday, rather than examining how it’s already being used right now. It’s more convenient to keep eyes on the future than on the real impacts unfolding in the present.

That’s why it’s so important to understand the foundations of this technology—where it comes from, what it actually is, and why it feels like it’s suddenly everywhere.

So why, in November 2022, did a chatbot like ChatGPT emerge and suddenly dominate the tech conversation? I think there are three key reasons. The first is centralized computing power. Back in 2006, Amazon began building massive centralized cloud computing warehouses—what we now call data centers. Imagine an e-commerce warehouse, but instead of packages, it’s packed wall-to-wall with servers. These enormous facilities require huge amounts of energy. Over the past two decades, they’ve expanded rapidly and become essential to the infrastructure behind the internet and the digital platforms we use every day.

So why are we seeing this explosion of AI tools right now? Yes, they require centralized computing power, but they also need something else: massive amounts of data. Companies collect enormous quantities of information from the open web and beyond, feeding it into these models. The result? Tools that seem far more capable than previous versions, not because of magic, but because they’re powered by vastly more data and computing resources.

That’s why data collection is so central. It fuels not just generative AI but also targeted advertising and many other systems. To gather all that data, companies have built a vast surveillance infrastructure, quietly capturing information across nearly every corner of our digital lives.

But there’s a third ingredient here: money. Immense amounts of capital are required to build and scale this kind of infrastructure. Companies such as OpenAI are reportedly losing billions each year in the short term, betting that these tools will become profitable in the long run.

They can afford to take that risk because they’re backed by some of the largest, most valuable corporations in the world. These tech giants are channeling their capital into realizing their particular vision of the future—one that depends on expanding AI, increasing computational power, and rolling it all out at a global scale.

So what do these infrastructures actually look like?

We often talk about “the Cloud” as if it were something intangible—data floating in the ether. But in reality, all that data lives in massive physical facilities that require enormous amounts of power and water to operate.

Hyper-scale data centers are a step beyond the standard data centers that have existed for decades. These facilities are far larger, in both footprint and impact. And they’re growing fast.

In 2018, there were about 430 hyper-scale data centers worldwide. By 2020, that number had jumped to 597. By the end of 2024, it had nearly doubled to 1,136. According to Synergy Research Group, another 504 are currently under construction or in the planning stages, driven largely by the surge in demand for generative AI infrastructure.

Roughly 40 to 50 percent of these centers are located in the U.S., though international growth is accelerating, especially in China. The three biggest players—Amazon, Microsoft, and Google—own about half of them.

As these facilities multiply, so do concerns from the communities where they’re built. One data center requires significant resources, but build five or ten in the same area, and the strain on local power and water systems becomes hard to ignore.

Around the world, more and more communities are beginning to push back, and for good reason. Hyper-scale data centers such as Google’s use an average of 550,000 gallons of water per day, or about 200 million gallons per year, primarily for cooling. Just as a laptop heats up under heavy use, these massive facilities, housing tens of thousands of constantly running servers, generate an enormous amount of heat. That heat has to go somewhere, so water and air conditioning systems are used to keep things cool.

Just between 2022 and 2023, Google’s water use across its data centers rose by 20 percent. At Microsoft, it jumped 34 percent. And that was before the generative AI boom really gained momentum, so it’s safe to say those numbers have only gone up since.

In pursuit of lower costs, many companies are building hyper-scale data centers in more remote or arid regions such as Arizona or parts of Spain—where water is already scarce. These areas often offer more access to renewable energy, which allows companies to market the facilities as “green,” but in reality, this shift puts even greater stress on already fragile water supplies.

Next, of course, is energy use. Globally, data centers currently account for about 2–3% of total electricity consumption. In the U.S., that figure is closer to 5%, since, as mentioned earlier, a disproportionate share of data centers are located here, and demand is only set to grow. In 2022, data centers, along with crypto and AI infrastructure, consumed about 460 terawatt-hours of electricity worldwide—roughly equivalent to the total electricity use of France. By 2026, the International Energy Agency projects that number will more than double to 1,050 terawatt-hours, about the same as Japan’s total annual electricity use. That’s a massive escalation in just a few years.

Ireland is on the frontlines of this issue. Right now, 21% of all metered electricity used in Ireland goes to data centers. In winter, this creates serious strain on the grid, sometimes triggering public alerts that warn residents to reduce energy use or risk outages. As a result, there’s growing pressure to extend what has been a temporary moratorium on new data centers in Dublin. But Ireland’s struggle is just the tip of the iceberg; similar tensions are emerging in communities around the world.

So, where are we headed? Generative AI really began taking off at the end of 2022, and the momentum hasn’t slowed. In early 2024, at the World Economic Forum in Davos, OpenAI CEO Sam Altman told Bloomberg: “We need way more energy in the world than I think we thought we needed before. We still don’t appreciate the energy needs of this technology.” He went on to say that the world may soon have to embrace geoengineering as a stopgap for climate impacts, unless, of course, we have a breakthrough in nuclear energy. In other words, we’re pushing forward with AI, no matter the energy cost, and if it overwhelms the planet, we’ll just have to engineer our way out of it.

More recently, we’ve seen a major shake-up coming out of China. You might have heard about DeepSeek, a company that’s doing what American AI companies are doing, but far more efficiently. Its emergence rattled the industry, causing U.S. tech stock prices to dip as investors began to question whether this AI boom is really all it’s cracked up to be, and whether the massive buildout by U.S. companies was truly justified. But of course, they’re not backing down.

Not long after DeepSeek’s debut, Sam Altman, Oracle CEO Larry Ellison, and SoftBank’s Masayoshi Son went to the White House to announce a $500 billion investment—code-named Stargate—aimed at building even more massive, nuclear-powered data centers. Meanwhile, Nvidia CEO Jensen Huang responded to DeepSeek’s efficiency by saying that greater efficiency will only drive greater demand, ultimately requiring 100 times more computing capacity. In his view, more efficient models don’t reduce resource use, they multiply it.

But is that actually what’s happening?

We’re starting to see some serious cracks in the foundation. Microsoft has recently canceled a number of data center leases, raising red flags for investors. Even leaders such as Alibaba’s Chairman Joe Tsai have warned that we may be in the middle of an AI data center build-out bubble.

So I’ll leave you with two final questions.

First: Who gets to decide what kinds of technology we build? Should those decisions be left to people such as Sam Altman or Microsoft’s Satya Nadella? Or should we be making these choices democratically, asking whether it really makes sense to invest staggering amounts of water, energy, and materials into technologies whose benefits are still unclear?

And second: How much computation do we actually need? Do we really need to build out endless data centers to support a flood of AI tools with questionable uses—tools that often serve tech companies’ bottom lines more than the public good? These companies rely on constantly growing demand for Cloud services to keep profits up, but that doesn’t mean we have to go along with it. It’s worth asking: how much computing capacity do we truly need? I’d argue it’s probably a lot less than what they want us to believe.

[Paris Marx is a tech critic and author of Road to Nowhere. Both articles courtesy: Bioneers, an innovative nonprofit organization that highlights breakthrough solutions for restoring people and planet.]

Janata Weekly does not necessarily adhere to all of the views conveyed in articles republished by it. Our goal is to share a variety of democratic socialist perspectives that we think our readers will find interesting or useful. —Eds.
