The AI Footprint: The Path to Sustainable AI

Dr Paul Dongha, Head of Responsible AI and AI Strategy, NatWest Group, says efforts to manage AI’s ‘risks and rewards’ must not inadvertently create new environmental and economic problems.

The global trajectory of data centre expansion, fuelled by AI demand, is on a collision course with environmental sustainability goals. Recent reports confirm that this rapid expansion is a systemic shock to global energy and water systems. A recent McKinsey report estimated that up to 70% of data centre growth through 2030 will come from demand for AI.

This demand has generated immediate concerns across three critical environmental factors: CO2 emissions stemming from the operation of AI models in data centres often reliant on fossil fuels; surging electricity demand that is straining grids globally; and the alarming depletion of water resources needed for essential cooling.

Scale of the impact

The energy demands associated with the AI boom have reached a staggering scale. The International Energy Agency's (IEA) 2025 report predicts that global data centre electricity consumption will more than double by 2030, rising from an estimated 415 terawatt-hours (TWh) in 2024 to 945 TWh – more than the total current annual electricity consumption of Japan.

The consequences of this rapid energy surge extend directly to climate impact. The sheer growth of demand is outpacing the rate at which power grids can decarbonise and integrate renewable sources.

Goldman Sachs Research forecasts that by 2030 approximately 60% of this increasing electricity demand will be met by burning fossil fuels. This dependence is projected to cause a massive increase of 215–220 million tons of CO2 emissions, representing about 0.6% of global energy emissions.

The climate cost begins long before an AI model is deployed. Research from the University of Massachusetts reveals the immense environmental cost of training large models: creating one sophisticated AI model can emit more than 258 tonnes of CO2 – roughly equivalent to five times the lifetime emissions of an average American car.

Beyond operational emissions, AI infrastructure demands enormous volumes of water for cooling. Operating high-density chips requires substantial cooling to regulate the temperature of components during processing.

A 2023 report detailed that a single Microsoft data centre in Iowa consumed 11.5 million gallons of water every month over a two-year period.

Managing AI risk, standards and processes for sustainability

The management of AI’s environmental footprint is moving decisively from voluntary corporate social responsibility to mandatory compliance and core business strategy. This pivot is enforced by emerging global regulations and the strategic convergence of financial discipline and environmental optimisation.

Regulation and compliance

Governments and international bodies are introducing regulatory mechanisms designed to quantify and constrain AI’s resource impacts.

The EU AI Act’s sustainability provisions implicitly encourage responsible sourcing and circular economy practices within the AI hardware supply chain.

At the US federal level, legislation such as Senate Bill 3732 proposes requiring the Environmental Protection Agency (EPA) and the Department of Energy (DOE) to establish a standardised system for reporting on and mitigating the negative environmental impacts of AI.

New AI-specific standards are emerging, such as ISO/IEC 21031:2024, which defines the Software Carbon Intensity (SCI) specification. This methodology provides a verifiable, quantitative calculation for the carbon emissions rate of a software system, ensuring that environmental impact measurement moves beyond qualitative disclosure to quantitative, actionable metrics.
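The SCI specification expresses this rate as a simple formula: operational energy multiplied by grid carbon intensity, plus amortised embodied emissions, divided by a functional unit of the software's choosing. The sketch below illustrates that calculation in Python; the specific numbers and the choice of API calls as the functional unit are illustrative assumptions, not figures from the standard.

```python
def software_carbon_intensity(energy_kwh, grid_intensity_g_per_kwh,
                              embodied_g, functional_units):
    """SCI = ((E * I) + M) per R, per the Software Carbon Intensity spec.

    E: operational energy consumed by the software (kWh)
    I: carbon intensity of the regional grid (gCO2e/kWh)
    M: embodied hardware emissions amortised to this workload (gCO2e)
    R: functional unit, e.g. API calls served (illustrative choice)
    """
    operational_g = energy_kwh * grid_intensity_g_per_kwh
    return (operational_g + embodied_g) / functional_units

# Hypothetical workload: 120 kWh on a 400 gCO2e/kWh grid, plus
# 8,000 gCO2e of amortised embodied emissions, over 1,000,000 calls.
sci = software_carbon_intensity(120, 400, 8_000, 1_000_000)
print(f"{sci:.3f} gCO2e per API call")
```

Because the result is a rate per functional unit rather than a total, it remains comparable as a system scales – the property that makes SCI useful as an actionable metric rather than a one-off disclosure.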

Strategic convergence

To succeed in this new regulatory landscape, organisations must integrate environmental considerations directly into their operational and financial governance structures. This strategy is achieved through the coupling of FinOps (Financial Operations) with GreenOps (Green Operations).

FinOps involves monitoring, documenting, and controlling resource allocation decisions to align spending with business outcomes.

GreenOps specifically integrates sustainability into IT operations, focusing on optimising cloud usage, reducing carbon emissions, and promoting sustainable practices across the entire IT lifecycle.

When these two frameworks are integrated, they create a powerful dual-value proposition. GreenOps activities are inherent cost-savers: practices like deleting unused storage volumes, rightsizing virtual machines, and consolidating workloads not only drastically shrink the operational carbon footprint but also reduce monthly cloud bills.
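A rightsizing sweep of the kind described above can be as simple as flagging machines whose average utilisation falls below a threshold. This is a minimal sketch of that idea; the fleet data, field names, and 15% threshold are hypothetical examples, not a real cloud provider's API.

```python
def flag_rightsizing_candidates(vms, cpu_threshold=0.15):
    """Flag VMs whose average CPU utilisation is below a threshold.

    These machines are candidates for downsizing or consolidation,
    cutting both the monthly cloud bill and operational carbon.
    """
    return [vm["name"] for vm in vms if vm["avg_cpu"] < cpu_threshold]

# Hypothetical fleet utilisation figures for illustration only.
fleet = [
    {"name": "train-gpu-01", "avg_cpu": 0.72},
    {"name": "batch-etl-02", "avg_cpu": 0.06},
    {"name": "inference-03", "avg_cpu": 0.11},
]
print(flag_rightsizing_candidates(fleet))  # ['batch-etl-02', 'inference-03']
```

In practice, the same utilisation report serves both teams: FinOps reads it as wasted spend, GreenOps as wasted carbon – which is precisely the dual-value proposition.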

Smart operations

The most immediate tool for greening existing AI workloads is Carbon-Aware Scheduling. This optimisation strategy systematically prioritises the execution of energy-intensive tasks during periods when regional electricity grids source a higher proportion of low-carbon or renewable energy.

By analysing application usage patterns and identifying flexible, non-critical tasks, AI operations teams can dynamically shift processing to locations or times when carbon intensity is low. This approach not only reduces the carbon footprint directly but also often yields financial benefits, as energy prices frequently correlate with carbon intensity.
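At its core, carbon-aware scheduling is a window-selection problem: given a carbon-intensity forecast for the grid, defer a flexible job to the contiguous stretch with the lowest average intensity. The sketch below shows the idea under simple assumptions – an hourly forecast list and a fixed job duration; real schedulers would draw forecasts from a grid-data provider rather than a hard-coded list.

```python
def pick_greenest_window(forecast, duration_hours):
    """Return (start_hour, avg_intensity) of the contiguous window with
    the lowest average carbon intensity (gCO2e/kWh) in the forecast."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration_hours + 1):
        window = forecast[start:start + duration_hours]
        avg = sum(window) / duration_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 8-hour forecast: a 3-hour training job is deferred to
# hours 3-5, when renewables push grid intensity to its lowest.
forecast = [420, 390, 310, 180, 150, 160, 280, 400]
start, avg = pick_greenest_window(forecast, 3)
print(f"run at hour {start}, avg intensity {avg:.0f} gCO2e/kWh")
```

The same logic extends across regions: evaluate the forecast for each candidate data centre and dispatch the job to whichever location-and-time combination minimises average intensity.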

Supply chain transformation and circular economy

Operational efficiency must be matched by a rigorous strategy to address embodied carbon – the emissions released during the production, transport, and construction of AI hardware and data centre facilities.

The transition to a circular economy can be reinforced by stringent procurement and supply chain practices. Organisations should incorporate circular economy principles by selecting third-party vendors with robust environmental certifications and verified sustainable practices. Tools that offer Lifecycle Assessments (LCAs), such as Clarity AI, are essential for identifying and mitigating embodied carbon risks associated with new hardware acquisition.

Securing sustainability in AI innovation

Sustainable AI is no longer a technological aspiration but a strategic business imperative driven by compliance frameworks. This requires a shift away from reactive offsetting toward systemic architectural redesign.

Leading organisations will focus on integrated solutions that address efficiency across the entire value chain. These include the institutionalisation of FinOps/GreenOps frameworks to align financial incentives with carbon reduction, and the immediate implementation of carbon-aware scheduling to maximise the use of renewable energy.

By viewing sustainable practices not as a cost burden but as essential risk mitigation, early adopters are positioned to set industry benchmarks, secure regulatory resilience, and gain a definitive market advantage with eco-conscious customers and policy makers.

About the author

Dr Paul Dongha leads Responsible AI and AI Strategy at one of the UK’s largest banks, NatWest Group, and is the co-author of Governing the Machine: How to navigate the risks of AI and unlock its true potential (Bloomsbury Business). He ensures AI innovation drives value while rigorously adhering to regulations and protecting customers. Paul holds a PhD in AI agent technology from the University of Manchester.
