Can AI Be Sustainable? Moving Toward Ethical Tech Use
The future of artificial intelligence doesn’t have to be extractive, opaque, or unsustainable. While AI’s current environmental footprint is significant — from carbon emissions to water use — there is growing momentum to build technologies that align with both climate and ethical values.
But is sustainability in AI really possible? And if so, what does it look like?
In this final article in our environmental series, we explore what a sustainable AI ecosystem could mean, who’s working toward it, and what role users, developers, and institutions can play in shifting the system.
The Case for Sustainable AI
For AI to be sustainable, it has to work within — not against — the ecological limits of the planet. That means reducing its energy and water demands, sourcing materials ethically, and being transparent about environmental impact. It also means avoiding harm to the communities that provide the infrastructure, labor, and land that make AI possible.
Sustainability in AI is not just a matter of engineering. It’s a question of design, policy, and cultural will.
What Sustainable AI Could Look Like
A truly sustainable AI system would be:
Efficient: Models that achieve meaningful results without unnecessary size or compute demands.
Transparent: Open data about emissions, energy use, and sourcing practices.
Green-powered: Data centers running on verified renewable energy.
Equitably built: Hardware sourced with respect for labor, ecosystems, and Indigenous land rights.
Locally aware: Infrastructure decisions made with input from the communities they affect.
This is a high bar — but not an impossible one.
Promising Models and Approaches
Several organizations and research groups are working on more sustainable forms of AI. Some focus on building smaller, task-specific models instead of massive general-purpose ones. Others are developing energy-aware training tools that let researchers track emissions in real time. Open-source initiatives often prioritize lightweight models that can run locally, reducing reliance on centralized data centers.
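The emissions-tracking idea is simple at its core: energy used, multiplied by the carbon intensity of the local grid. As a rough sketch (the figures and the 1.5 PUE overhead below are illustrative assumptions, not measurements — real tools such as the open-source CodeCarbon library sample hardware power draw and grid intensity at runtime):

```python
def estimate_emissions_kg(power_watts: float,
                          hours: float,
                          grid_kg_co2_per_kwh: float,
                          pue: float = 1.5) -> float:
    """Back-of-envelope CO2 estimate (kg) for a training run.

    power_watts: average draw of the accelerators
    hours: wall-clock training time
    grid_kg_co2_per_kwh: carbon intensity of the local grid
    pue: power usage effectiveness of the data center
         (overhead for cooling etc.; 1.5 is a rough industry figure)
    """
    energy_kwh = (power_watts / 1000) * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Example: eight 300 W GPUs running for 24 hours on a 0.4 kg/kWh grid
# comes to roughly 34.6 kg of CO2 under these assumptions.
print(round(estimate_emissions_kg(8 * 300, 24, 0.4), 1))
```

Even this crude arithmetic makes one lever obvious: moving the same workload to a cleaner grid cuts the estimate proportionally, which is why data center siting matters as much as model size.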
There’s also a movement toward model distillation — training a compact “student” model to reproduce the behavior of a larger “teacher” — and parameter efficiency: technical strategies that deliver comparable performance with less compute.
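The core of distillation can be sketched in a few lines: the student is trained to match the teacher’s temperature-softened output distribution. This is a minimal illustration in plain Python (in real training this loss is combined with ordinary cross-entropy on the true labels and minimized by gradient descent; the logits below are made up):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature gives softer targets."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's soft targets."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

# A student whose outputs track the teacher's incurs a lower loss.
teacher = [3.0, 1.0, 0.2]
close_student = [2.8, 1.1, 0.3]
far_student = [0.2, 1.0, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

The sustainability payoff comes at inference: once trained, the small student answers queries at a fraction of the teacher’s energy cost.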
While these developments are early-stage, they signal a shift in priorities: from scale-at-all-costs to purpose-built, resource-conscious design.
The Role of Policy and Public Pressure
Sustainable AI won’t emerge from corporate goodwill alone. It requires:
Regulatory standards for emissions, water use, and disclosure.
Investment in green infrastructure as a public good.
Incentives for efficient model development, not just for chasing bigger, faster benchmarks.
Public pressure to prioritize long-term planetary wellbeing over short-term market wins.
Policymakers have a chance to shape AI development the same way they’ve shaped auto emissions, building codes, or renewable energy subsidies. The challenge is keeping up with the speed of deployment.
What You Can Do — and Why It Matters
You don’t need to be a developer to influence AI sustainability. You can:
Choose tools and platforms that disclose their environmental practices.
Support open models designed for lower resource use.
Use AI intentionally — not endlessly.
Share awareness with your networks and communities.
Individual action won’t solve systemic issues — but it can signal demand. Enough signals create movement.
Conclusion: Building Toward Better
AI is not inherently unsustainable. But the way we currently design, train, and deploy it often is. If we want technology that supports a liveable future, we need to align development with ecological limits, human dignity, and long-term responsibility.
A sustainable AI system isn’t just possible — it’s necessary.
And the work starts now.
References and Resources
The following sources inform the ethical, legal, and technical guidance shared throughout The Daisy-Chain:
U.S. Copyright Office: Policy on AI and Human Authorship
Official guidance on copyright eligibility for AI-generated works.
UNESCO: AI Ethics Guidelines
Global framework for responsible and inclusive use of artificial intelligence.
Partnership on AI
Research and recommendations on fair, transparent AI development and use.
OECD AI Principles
International standards for trustworthy AI.
Stanford Center for Research on Foundation Models (CRFM)
Research on large-scale models, limitations, and safety concerns.
MIT Technology Review – AI Ethics Coverage
Accessible, well-sourced articles on AI use, bias, and real-world impact.
OpenAI’s Usage Policies and System Card (for ChatGPT & DALL·E)
Policy information for responsible AI use in consumer tools.