❈ ❈ ❈
[In the wake of ChatGPT’s release in late 2022, artificial intelligence quickly became a global obsession—and a corporate gold rush. But behind the promises of productivity, convenience, and innovation lies a far more sobering reality: AI is accelerating energy consumption, fueling inequality, and embedding mass surveillance deeper into the foundations of society. These are the third and fourth parts of a four-part series, adapted from the Bioneers 2025 panel AI and the Ecocidal Hubris of Silicon Valley, featuring leading voices who challenge the dominant narratives about AI and call for deeper scrutiny of its impacts. We published the first two parts in the previous issue of Janata Weekly; here we carry the remaining two.]
● ● ●
The Illusion of Control: Deregulation, Legal Loopholes and the Rise of AI
Claire Cummings
In this third installment of the series on AI’s hidden costs, the author traces the roots of today’s AI boom back to the biotech battles of the 1970s, the rise of deregulation under Reagan, and the legal frameworks that continue to prioritize profit over people.
For more than 30 years, I’ve worked at the intersection of law, journalism, and activism, focused in large part on biotechnology and its growing influence on agriculture. That experience has shaped how I understand the deeper forces reshaping our legal systems, our environment, and our humanity.
Over the past five decades, the legal and regulatory systems meant to protect our privacy, health, and environment have been steadily dismantled. Rights we once took for granted have been quietly eroded, often in the name of innovation or efficiency.
Let me take you back to 1975, to a place called Asilomar. Asilomar is a conference center in Pacific Grove, California. That year, scientists developing recombinant DNA technology—using tumor virus DNA and E. coli to cut and splice genes—recognized the risks. What if this technology got out into the world? So they held a conference, but in the end, they chose to self-regulate. They didn’t want government oversight. That decision still shapes our failures to adequately regulate technologies today.
As a result, this work has continued largely without external checks, with scientific breakthroughs rapidly deployed worldwide as commercial technologies without meaningful safeguards. Many of these applications remain essentially uncontrolled experiments.
Just after Asilomar, Ronald Reagan launched his presidential campaign. He ran on a platform of deregulation and eventually won, declaring in his 1981 inaugural address the now-famous line: “Government is not the solution to our problem; government is the problem.”
In 1986, Reagan’s vice president, George H. W. Bush, invited four Monsanto executives to the White House. Together, they crafted a plan to support biotechnology with minimal interference. That plan was formalized the same year as the “Coordinated Framework,” and Bush carried it forward when he became president. It gave industry everything it wanted: no new laws, no new oversight, just a patchwork of existing regulations never meant to handle genetic engineering.
Sound familiar?
Today, we’re facing another wave of powerful, poorly regulated technology: AI. And the same pattern is repeating. Scientific-sounding concepts are invented to make it all seem safe. The review process is largely voluntary, and the government only knows what the companies choose to share.
I did a little test recently. I asked Google, “Is artificial intelligence regulated in the United States?” And it said yes. The reality tells a different story.
With AI, as with biotechnology, there are no new laws, no meaningful oversight. What Reagan started—dismantling the agencies meant to serve the public—is still happening, and what we’re seeing now is the result: regulatory agencies being gutted and businessmen with clear conflicts of interest being put in charge of public protections.
And even when regulatory agencies do exist and courts agree they have jurisdiction, what we usually get is risk assessment—a cost–benefit calculation, not a real safeguard. It’s not protection; it’s permission.
These technologies are inherently invasive. Think back to the debates around genetic engineering and GMOs. These were products that entered our bodies and ecosystems. They weren’t just ideas; they became part of us, often without our consent.
But the campaigns we ran around GMOs offer a model for how to respond. We didn’t just critique the technology; we organized across sectors and spoke directly to the public. Together, we demythologized the science. We cut through the industry hype and told people what was really going on. And it worked. We helped build public skepticism. Not cynicism, but healthy doubt. The kind of critical thinking we desperately need right now around AI.
And just as important, we offered an alternative. We didn’t stop at opposition. We promoted organic food, sustainable farming, and direct connections between farmers and consumers. People had something to say yes to. That combination of clear critique and tangible alternatives is one of the most powerful tools we have.
Another critical point of intervention is intellectual property (IP). The lifeblood of both GMOs and AI is the ability to patent and profit from information. In the case of GMO patents, it’s life itself—genes, organisms, even biological processes. Over time, IP law has been reshaped to make this not only possible, but standard. This legal structure doesn’t just enable exploitation; it also hides it. Trade secrets and proprietary data make it nearly impossible to know what’s being done, let alone to stop it. That’s how these technologies continue to advance—out of view and without accountability.
Legal reform is one piece of the puzzle, but it won’t be enough on its own. We also need to rethink how we tell the story. Mainstream media tends to embrace whatever’s new and shiny, often without asking hard questions. That’s why it’s critical we create our own channels: spaces rooted in care, caution, and collective values. We did it during the GMO campaigns, and we can do it again.
But at the heart of this moment is a deeper question: How do we resist? How do we confront these technologies and the systems that enable them while staying grounded in our humanity? There’s no single answer, but I hope these stories spark ideas about where you can intervene, and how your voice might help shape what comes next.
Most technologies, going all the way back to the plow, have been designed to replace human effort. That’s their core function. Today, doctors don’t have to conduct patient interviews because AI can do it. Farmers don’t have to weed because they rely on herbicide-resistant crops. These tools aren’t just making tasks easier—they’re replacing people.
This isn’t only a threat to jobs. It’s something much deeper. I want to invite you to consider: What does it mean to be human? What are we losing when we adopt these technologies so readily, without reflection?
I want to share a recent personal experience—something that happened just a couple of weeks ago.
My husband and I live in a senior living center up in Sonoma County, a community that was started by the San Francisco Zen Center. It’s very intentional, rooted in the idea of “beloved community.” We’re deeply committed to living by our principles, taking care of each other, and making decisions together using Quaker-style consensus tools.
Not long ago, two people came by promoting AI tools for senior care. One of the products they introduced was a surveillance system that watches you as you move around your apartment. It tracks how you walk, how steady you are, and how active you are, supposedly to learn how you’re doing, and to alert someone if you fall or if your movements don’t “match” the behavioral data it has collected about you.
The second product they presented really broke my heart. It was an artificial intelligence “friend” for people who were lonely.
Of course, we rejected both proposals outright. But the experience also challenged us to really live according to our principles. If we believe in that concept of beloved community, then we have to ask: How do we truly take care of one another? How do we notice if someone is lonely, or struggling, or in need of support?
The reality is that many care communities will adopt these technologies because they’re underfunded, understaffed, and overburdened. On paper, AI looks like a practical solution. But I’m challenging all of us to go deeper, not just to oppose these tools in theory or try to tweak the legal system, but to call on our own humanity. Ask yourself: What can I do to replace what AI is promising everyone else?
In 1964, I was a student at UC Berkeley, part of the Free Speech Movement. We were young, idealistic, and determined to figure out how real change happens—how to challenge unjust systems while staying true to our deepest values.
The day Mario Savio gave his famous “Bodies Upon the Gears” speech, we were running a freedom school, kind of like the Occupy movement. We held classes and had conversations about how to create change, how to live in alignment with our deepest values. That’s what was happening in December 1964 on Sproul Plaza on the Berkeley campus.
We didn’t know what we were doing. We were figuring it out as we went. I hope you’re willing to do the same—to step into the unknown, because the stakes are high. We are in a moment of crisis. My generation did what we could. We made progress, but our time is passing.
So how will you rise to meet the challenge? How will you respond to what may be some of the most dangerous and dehumanizing technologies our society has ever seen?
[Claire Cummings is an environmental lawyer and longtime activist.]
❈ ❈ ❈
Farming in the Dark: The Black Box of AI and the Erosion of Food Sovereignty
Soledad Vogliano
In this essay, the author unpacks the expanding role of AI in food systems. Drawing on her work supporting Indigenous and peasant movements and her leadership on digitalization at the ETC Group, Soledad makes the case that AI in agriculture is not just a technical issue; it’s a political one.
Artificial intelligence is quietly but profoundly reshaping the way we grow food and manage biodiversity. While it’s often promoted as a high-tech fix for some of our biggest global challenges, from climate change to hunger, its growing presence in agriculture raises unsettling questions: Who’s really in control of these tools? And whose interests are they designed to serve?
Let’s start with what I consider the elephant in the room: the black box.
The “black box” refers to the opaque nature of many AI systems, especially those built using machine learning. These models can generate highly accurate predictions, but how they arrive at those decisions is often unclear, even to the experts who design them. We can observe what goes in and what comes out, but the inner workings remain hidden. That lack of transparency is one of AI’s most dangerous features—and one of its most overlooked.
Those mysterious algorithms making decisions about everything from crop protection to biodiversity conservation are, in practice, about as transparent as a brick wall.
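To make the black box concrete, here is a minimal sketch in Python. It is purely illustrative: scikit-learn is a real library, but the “farm records,” the labels, and the decision rule are all invented. The point is what the output withholds: a recommendation, even a confidence score, but no account of which factors drove the advice or whether the training data resembles the farm being advised.

```python
# A minimal, illustrative sketch of the "black box": synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical farm records: [soil_moisture, temperature, pest_count]
X = rng.uniform(low=[0.1, 10.0, 0.0], high=[0.9, 40.0, 50.0], size=(500, 3))
# Hypothetical label: 1 = "spray pesticide", 0 = "don't" (a made-up rule)
y = ((X[:, 2] > 25) & (X[:, 0] < 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A farmer's field: the system returns a recommendation and a confidence score...
johns_field = np.array([[0.4, 28.0, 30.0]])
print("Recommendation:", model.predict(johns_field)[0])
print("Confidence:", model.predict_proba(johns_field)[0])

# ...but the "why" is spread across hundreds of decision trees. Nothing in
# the output says which factors mattered, whether the training data fits
# this region, or what the model ignored.
print("Trees consulted:", len(model.estimators_))
```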
Imagine a farmer—let’s call him John—standing in his field, facing a pest outbreak. He consults an AI system developed by a far-off tech company for guidance. The system gives him a recommendation. But here’s the problem: John has no idea how that decision was made. Was it based on the latest agronomic data? Was it tailored to his region’s climate or soil? Was it simply designed to push a product? He can’t tell, and there’s no way for him to find out.
That’s the danger of the black box. When AI systems operate without transparency, their decisions may be flawed, biased, or harmful, and users are left in the dark. If John applies a pesticide that degrades his soil or plants a crop unsuited to his land, he may not even know what went wrong, let alone how to fix it.
The black box doesn’t just obscure technical processes; it raises serious ethical questions. In high-stakes fields such as agriculture, healthcare, finance, and criminal justice, this opacity threatens fairness, accountability, and human agency.
This brings us to a second and equally urgent concern: accountability. What happens when decisions that shape lives and livelihoods are made by invisible algorithms that answer to no one? It may sound dystopian, but this is increasingly the world we live in as AI systems are integrated into the foundations of agriculture, health care, finance, and more.
Consider a scenario: an AI system recommends a pesticide that ends up destroying beneficial insects or encourages a crop choice that later crashes in value. Who is responsible? The farmer who followed the advice? The corporation that built the model? The algorithm itself—a piece of software with no awareness or agency?
This is where accountability breaks down. Without transparency, there’s no clear line of responsibility. Tech companies can shrug off failures, claiming the system, not the company, made the decision. Meanwhile, it’s the farmers, ecosystems, and communities who suffer the consequences. It’s like suffering harm from a missed diagnosis, only to be told afterward that “the AI said it was fine.” How can that possibly be acceptable?
The lack of accountability in black box AI isn’t just a technical oversight; it’s a systemic failure. One that protects corporate interests at the expense of human and environmental well-being.
So, who’s really in control of AI in agriculture? The answer probably won’t surprise you. Many of the same corporate giants that dominate agrochemicals and industrial farming—companies such as Bayer, Syngenta, and Corteva—are now at the forefront of AI integration, often in collaboration with major tech firms. Together, they are shaping the digital future of agriculture.
These companies are using AI to steer decisions about what gets planted, how crops are managed, and which inputs are used. Their systems are powered by data they often control, collected from farms across the globe. And they’re embedding themselves deeper into agriculture by layering digital decision-making on top of the same extractive models they’ve long promoted—models reliant on genetically modified seeds, synthetic fertilizers, and pesticides.
The result is a consolidation of power. AI becomes a tool not for democratizing knowledge or supporting sustainability, but for reinforcing the dominance of firms already shaping global food systems. The technologies remain opaque, their logic inaccessible to farmers and the public. What looks like innovation is often a digital power grab that risks locking farmers into systems they can neither fully understand nor easily escape.
And it doesn’t stop there.
Even when AI systems appear neutral, they are not. Algorithmic bias is a growing concern that we ignore at our peril. These systems are trained on data that reflects the values, assumptions, and interests of those who create and control them. In farming, this often means data drawn from industrial agricultural practices, leading to recommendations that prioritize yield and profit over soil health, biodiversity, and local needs, and that overlook the ecological and cultural realities of small, diverse, or Indigenous-managed farms.
When corporate interests shape the data, they shape the outcomes, and when those outcomes are flawed or biased, it’s communities and ecosystems that pay the price.
This leads to harmful mismatches. AI may suggest fertilizers or pesticides based on monoculture norms, ignoring local soils, biodiversity, and traditional knowledge that has sustained communities for generations. Yet these outputs are often framed as objective, scientifically validated truths, despite being based on biased inputs.
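As a toy illustration of how skewed data becomes skewed advice, consider the sketch below (Python again, with entirely invented numbers). A model fit only on high-input monoculture plots extrapolates its “more input, more output” logic to a low-input polyculture farm it has never seen, and its output carries no warning of the mismatch.

```python
# An illustrative sketch of bias from skewed training data: all numbers invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=1)

# Training data: 300 industrial monoculture plots, where yield tracks fertilizer.
fertilizer = rng.uniform(100, 300, size=(300, 1))                   # kg/ha
yield_t = 2.0 + 0.01 * fertilizer[:, 0] + rng.normal(0, 0.2, 300)   # t/ha

model = LinearRegression().fit(fertilizer, yield_t)

# A low-input polyculture plot: nothing like it appears in the training set,
# but the model answers anyway, projecting monoculture logic onto it.
polyculture_plot = np.array([[20.0]])                               # kg/ha
print("Predicted yield (t/ha):", model.predict(polyculture_plot)[0])

# The prediction looks like any other number; no flag marks this farm as
# lying far outside the data the model was trained on.
```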
Which brings us to another critical issue: data ownership, or more precisely, the lack of it. In the world of AI, whoever controls the data holds the power. And right now, that power lies almost exclusively with corporations. Data is often extracted from farmers, frequently without clear consent, and fed into AI models that go on to shape the tools, policies, and economic systems those very farmers must navigate.
This is a form of digital colonialism. Local and Indigenous communities that have long been the stewards of biodiversity and traditional ecological knowledge are seeing their insights extracted, repackaged, and monetized by distant actors. Their knowledge is treated not as a living inheritance, but as raw material to be mined for corporate gain. All of this is buried beneath layers of technical complexity, making it nearly impossible to recognize, let alone resist, the exploitation.
When AI systems are built on appropriated data and biased assumptions, they don’t just miss the mark, they perpetuate inequality, erode sovereignty, and turn culture itself into a commodity.
And then there’s the hype: the narrative that AI is the future, whether or not it actually works. One of the most troubling aspects of AI’s rapid rise is the overwhelming optimism surrounding it. The excitement—amplified by corporate marketing, media headlines, and government endorsements—has triggered a wave of massive investments, often based more on speculative promise than proven performance.
This rush to adopt AI has created artificial demand in sectors such as agriculture, even when the technologies in question remain opaque, unreliable, or misaligned with real-world needs. The more corporations can frame AI as revolutionary, the more funding, influence, and market share they can secure, even if the tools themselves haven’t delivered on their promises and even as their limitations go largely unacknowledged.
Mainstream media often reinforces this narrative, presenting AI as an inevitable solution to pressing global challenges: climate change, food insecurity, and ecological collapse. In doing so, it pushes critical questions to the margins: How effective is AI really? What are its social and environmental consequences? Who benefits, and who bears the cost?
In this environment, the deployment of AI technologies often outpaces our understanding of their impacts, leaving little room for democratic oversight or ethical reflection. That’s why we need to shift the narrative from top-down innovation to bottom-up assessment.
Bottom-up technology assessments are essential if we want AI to serve the public good rather than corporate interests. These approaches center community voices, lived experience, and local knowledge. They prioritize inclusion and transparency and ensure that those most affected by new technologies have a meaningful say in how they are developed, implemented, and evaluated.
Corporate-led evaluations often sideline Indigenous and local communities, undermining their rights to self-determination. In contrast, bottom-up approaches center those voices, allowing assessments to reflect cultural values, ecological knowledge, and sustainability priorities.
But effective bottom-up assessments must go beyond surface-level consultation. They should support community organizing and help local groups build and share their own narratives. These communities offer essential insights into how technologies affect ecosystems, livelihoods, and futures. When they are empowered to define risks and benefits on their own terms, the resulting assessments are far more likely to align with shared values and aspirations.
To conclude, the growing reliance on AI in agriculture and beyond raises serious concerns about transparency, accountability, bias, and power. The opacity of these systems, often referred to as the “black box,” combined with corporate control over both the tools and the data, risks exacerbating inequality and displacing local knowledge.
What we need instead is clear: greater transparency, better data, and inclusive, bottom-up assessments that ensure AI technologies serve all communities, not just corporate interests.
[Soledad Vogliano is an anthropologist, farmer, and Program Manager at the ETC Group. Both articles courtesy: Bioneers, an innovative nonprofit organization that highlights breakthrough solutions for restoring people and planet.]