It’s time for tough conversations on AI regulation, says Jen Gennai
Currently, there is no comprehensive global regulatory framework addressing the climate risks of Artificial Intelligence (AI). Some regulations exist, such as the EU’s Digital Green Deal, though they are not specific to AI. The EU AI Act, along with AI standards from global organizations such as ISO, IEEE, and the OECD, focuses on ethics, transparency, and safety, with little emphasis on environmental impact.
AI, powered by data centres, consumes massive amounts of energy and water. The numbers are staggering: according to Deloitte, by 2030, the global data centre industry is projected to emit 2.5 billion metric tons of CO₂, a figure equivalent to the carbon sequestration achieved by 115 billion trees grown over a decade.
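For perspective, here is a rough back-of-envelope check of that equivalence. It is a sketch only: it treats the 2.5 billion tonnes as a cumulative total and assumes each tree sequesters CO₂ at a constant rate over the decade, neither of which is stated in the source.

```python
# Back-of-envelope check of the tree equivalence quoted above.
# Assumptions (not from the source): the 2.5 billion tonnes is cumulative,
# and each tree sequesters CO₂ at a constant rate over the decade.

projected_emissions_tonnes = 2.5e9   # projected data centre CO₂ emissions by 2030
equivalent_trees = 115e9             # trees quoted in the comparison
years = 10                           # "grown over a decade"

kg_per_tree_total = projected_emissions_tonnes * 1000 / equivalent_trees
kg_per_tree_per_year = kg_per_tree_total / years

print(f"Implied sequestration per tree: {kg_per_tree_total:.1f} kg over {years} years")
print(f"Implied sequestration per tree per year: {kg_per_tree_per_year:.2f} kg")
# ≈ 21.7 kg per tree over the decade, roughly 2.2 kg of CO₂ per year
```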
Frameworks such as the EU’s Corporate Sustainability Reporting Directive (CSRD) and the EU Taxonomy affect AI by reshaping the data centre industry through required disclosures of energy use, emissions, and climate resilience efforts. Certifications such as LEED (Leadership in Energy and Environmental Design) and BREEAM (Building Research Establishment Environmental Assessment Method) set benchmarks for sustainability, while programs like Energy Star and ISO 50001 emphasize energy management and operational efficiency.
However, global compliance remains uneven, hampered by fragmented regulatory structures, a lack of localized incentives, continued reliance on fossil fuels, and insufficient enforcement resources. Global standards frameworks, often designed for developed markets, impose benchmarks that can feel unattainable for regions grappling with immediate economic priorities. Without tailored solutions, localized incentives, and stronger enforcement, sustainability frameworks risk alienating transition economies. And while voluntary corporate efforts offer promise, they are insufficient without robust regulatory support.
In the absence of such regulations, some leading technology companies are pioneering sustainability efforts. Meta’s data centres run on renewable energy, achieving net-zero emissions while using 80% less water than average facilities. Microsoft is reimagining infrastructure with wood-based data centres that cut embodied carbon by up to 50%, while integrating advanced liquid cooling systems to meet soaring energy demands.
Google’s sustainability initiatives illustrate both the potential and the challenges of voluntary corporate responsibility. Guided by principles such as “Be Socially Beneficial” and “Do Not Cause Overall Harm,” the company halted Cloud support for oil and gas extraction and increased investments in nuclear energy. Yet, its carbon footprint surged by 48% between 2019 and 2023, underscoring the difficulties of scaling sustainable practices without regulatory intervention.
The environmental challenges posed by AI are emblematic of a broader tension between technological advancement and sustainability, and between global and regional needs. The AI climate problem will only get worse: data centres’ electricity demand is projected to double by 2030 due to the additional load that artificial intelligence poses, according to the Electric Power Research Institute. Tailored AI climate risk strategies that consider the unique economic and geographic conditions of each region are essential to driving meaningful change. Only by aligning innovation with sustainability can AI and the data centre industry thrive within the limits of our planet.
To effectively address the climate risks posed by AI, mandatory carbon emissions reporting with specific call-outs for AI, energy efficiency standards, incentives for sustainable AI, international collaboration, and increased AI-specific corporate responsibility are all necessary. In the meantime, potential mitigations include powering data centres with renewable energy, being conscious of water usage and access, developing more efficient algorithms, using smaller data sets, investing in energy-efficient hardware, and having commercial partners and end users turn to AI only when it is necessary or provides significantly greater returns.
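As one illustration of that last mitigation, the sketch below shows how a team might weigh the estimated extra carbon cost of an AI query against its expected benefit before choosing the larger model. Every figure in it, including the per-query energy, the grid carbon intensity, and the value threshold, is a hypothetical assumption for illustration, not a measured value for any real system.

```python
# Hypothetical sketch: gate AI usage on an estimated carbon cost.
# All numbers below are illustrative assumptions, not measurements.

LLM_ENERGY_KWH_PER_QUERY = 0.003      # assumed energy for one large-model query
SEARCH_ENERGY_KWH_PER_QUERY = 0.0003  # assumed energy for a conventional lookup
GRID_INTENSITY_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity


def carbon_kg(energy_kwh: float, intensity: float = GRID_INTENSITY_KG_CO2_PER_KWH) -> float:
    """Estimated CO₂ emissions (kg) for a workload, given its energy use."""
    return energy_kwh * intensity


def should_use_ai(expected_extra_value: float, value_required_per_kg_co2: float = 100.0) -> bool:
    """Use the large model only when its expected extra value outweighs the
    extra carbon cost relative to the conventional alternative."""
    extra_carbon = carbon_kg(LLM_ENERGY_KWH_PER_QUERY) - carbon_kg(SEARCH_ENERGY_KWH_PER_QUERY)
    return expected_extra_value >= extra_carbon * value_required_per_kg_co2


if __name__ == "__main__":
    for value in (0.01, 1.0):
        print(f"Expected extra value {value}: use AI -> {should_use_ai(value)}")
```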