Are AI Companies Being Transparent About Their Impact?

AI companies often talk about innovation, speed, and disruption. But when it comes to the environmental impact of their technologies, the conversation gets quiet — or complicated.

As AI tools become embedded in education, business, and creative work, users have a right to know: How much energy do these systems use? Where are the data centers? How sustainable are these models?

In this article, we look at the current state of transparency in the AI industry, what’s missing, and why it matters.

The Silence Around Energy and Emissions

Despite growing awareness around AI’s environmental footprint, many of the most influential companies — including OpenAI, Google DeepMind, and Anthropic — are not providing clear or consistent data on how much energy their systems consume, where that energy comes from, or how much carbon is emitted during training and usage.

This silence becomes more troubling as models grow ever larger and user bases expand globally. Without access to hard numbers, users, researchers, and regulators are left in the dark, unable to assess the real-world impact of technologies that are rapidly becoming ubiquitous.

Why Companies Don’t Share

There are several reasons behind this opacity, most of them strategic. For one, revealing energy consumption might inadvertently disclose details about model size, architecture, or cost — all closely guarded in a highly competitive industry. Then there’s reputational risk: publishing high energy use could challenge a company’s public image as being green or ethical. And finally, the less data companies share, the less likely they are to attract regulation or calls for accountability.

But while these reasons may serve corporate interests, they do little to serve the public good — or the planet.

What Should Be Reported?

Transparency isn’t about revealing trade secrets; it’s about accountability. Just as we expect cars to disclose their emissions and food packaging to list ingredients, AI models should come with baseline environmental disclosures.

This could include the total energy used to train a model, the average energy required to generate outputs, the sources of electricity powering the data centers, any cooling-related water usage, and what, if any, mitigation efforts are in place to offset emissions.

These are not unreasonable asks. They are the minimum requirements for meaningful environmental accountability.
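To see why these disclosures matter, consider how they combine. A back-of-the-envelope sketch (every figure below is an illustrative assumption, not published data from any company) shows how disclosed energy use and a grid carbon-intensity factor together yield a comparable emissions number:

```python
# Back-of-the-envelope emissions estimate from disclosed energy figures.
# All numbers below are illustrative assumptions, not real company data.

def emissions_kg_co2(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """Convert energy use (kWh) into CO2-equivalent emissions (kg)."""
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical disclosures:
training_energy_kwh = 1_000_000   # assumed total energy to train a model
per_query_energy_kwh = 0.001      # assumed average energy per generated output
grid_intensity = 0.4              # assumed kg CO2e per kWh for the local grid

training_emissions = emissions_kg_co2(training_energy_kwh, grid_intensity)
per_query_grams = emissions_kg_co2(per_query_energy_kwh, grid_intensity) * 1000

print(f"Training: {training_emissions:,.0f} kg CO2e")   # 400,000 kg CO2e
print(f"Per query: {per_query_grams:.2f} g CO2e")       # 0.40 g CO2e
```

The point of the sketch is that neither number is computable without both inputs: energy use alone says nothing about emissions if the grid mix is undisclosed, which is exactly why the list above asks for both.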

A Few Positive Steps

To be fair, some positive movement is happening — though it’s early, scattered, and inconsistent. Google and Microsoft, for example, have released sustainability targets tied to their cloud infrastructure. Hugging Face has begun publishing energy estimates for select models. And open-source tools like CodeCarbon give developers a way to measure emissions on their own machines.

But without a shared standard or independent oversight, these efforts remain difficult to verify or compare.

Why It Matters

When AI companies don’t disclose their environmental impact, users cannot make ethical, informed choices. It leaves policymakers with no foundation for regulation. It leaves frontline communities — those near data centers or power plants — with no say in how their land and resources are used. And it makes it nearly impossible for environmental researchers and watchdogs to do their jobs.

In effect, opacity gives AI development a free pass — even as it expands its infrastructure and global footprint.

Toward Transparent AI

If AI is going to be part of a sustainable, equitable future, it needs to bring its environmental costs into the light. That means creating industry-wide standards for energy and emissions reporting, mandating independent audits of environmental claims, and building public-facing tools to track AI’s impact in real time. It also means supporting independent watchdogs and regulatory bodies that can hold companies accountable for what they build — and how they power it.

What You Can Do

As a user, advocate, or educator, you’re not powerless. You can support transparency by choosing platforms that publish sustainability data, asking companies to clarify their practices, using open-source tools that disclose their footprint, and pushing for climate-aligned AI policy in your region. Collective pressure works — but only if it’s informed.

Conclusion: Without Disclosure, There’s No Accountability

We shouldn’t have to guess how much water a single model consumes, or how many kilowatt-hours power our daily prompts. If AI is going to shape the future, it must do so transparently.

Responsible innovation doesn’t mean perfect systems — it means visible ones. The first step to sustainable AI is simply being willing to look.

Aira Thorne

Aira Thorne is an independent researcher and writer focused on the ethics of emerging technologies. Through The Daisy-Chain, she shares clear, beginner-friendly guides for responsible AI use.