Technology and ESG

By Megha Kumar  

For businesses and public services, generative AI provides the means to transform their operations, making them more efficient and effective. Yet its high resource consumption means developers and users must find ways of deploying it sustainably – otherwise, they may not only struggle to achieve their ESG goals but also hamper global efforts to decarbonise. 

Across economies and industries, AI can contribute enormously to achieving net-zero targets. For instance, it can help utilities allocate resources more efficiently; reduce energy wastage in hard-to-green transportation industries such as shipping; and optimise irrigation in agriculture. But just as it could be a boon as business and industry take steps to lower their carbon footprints, there is a potential downside.

Developing and deploying this technology is resource-intensive, contributing to carbon emissions regardless of how it is used. Large amounts of electricity and freshwater are used in the development and deployment of AI models, not least to cool servers in data centres. Research suggests that by 2027, AI servers’ annual electricity use globally could be equivalent to what Argentina or Sweden use individually in one year, while annual global AI freshwater use may be comparable to that of Malaysia in recent years. 

As countries focus on achieving net-zero climate targets, organisations developing and using AI could find themselves under increasing scrutiny from regulators, investors, environmental groups and governments over their resource usage. Therefore, as corporate decision-makers consider how they might exploit AI’s huge potential, they will simultaneously need to determine how that deployment can be made as green as possible.

Consideration of AI’s environmental impact should really begin now, because electricity consumption is currently a big contributor to carbon emissions: only about a third of global power comes from renewables. And droughts in key AI supply chain jurisdictions, such as Taiwan, together with the high level of water stress worldwide, mean concerns over water consumption, too, are very much at the top of the sustainability agenda.

Senior executives, then, must weigh what they want to achieve operationally through AI models against the ecological impact those models are likely to have and against their ESG targets. Striking this balance could involve decisions around the size of models deployed, the location of data centres and energy efficiencies.

Large vs niche models 

Major AI users exploring ways of advancing industry, science and medicine have little option but to use large, resource-hungry models to synthesise and analyse vast amounts of data. Ultimately their goal is to create AI models capable of effecting important breakthroughs in the way we live and work. Yet while this is their priority, they will still care about their resource consumption and try to reduce it as much as they can.

Most medium- and large-sized businesses, however, do have more of a choice over whether to deploy multi-purpose systems or niche models. Since the latter have narrower requirements, they are trained on smaller sets of data and therefore have smaller resource needs. So, for example, niche models might be appropriate for single-function departments, such as human resources, with narrowly defined tasks. On the other hand, cross-functional areas of business operations, such as sales and marketing or global supply-chain risk management, may require bigger systems, given their need to leverage data from multiple sources.

Water and power issues 

For developers and users of AI, an important consideration is the freshwater requirement for both the manufacture of specialised AI semiconductors and the cooling of data centres. Taiwan is an important source of the former but has also been prone to drought. This is an important driver of the ongoing diversification of the AI semiconductor supply chain, with geopolitical tensions between the US and China over Taiwan being the other key driver.

Similarly, the need to cool servers could encourage firms to deploy their data centres in less water-stressed locations. At first sight, desalination would seem to offer a solution, since untreated seawater is too corrosive to use for cooling. Regions with significant desalination capacity, such as the Middle East, might therefore appear attractive locations for data centres, but desalination is energy-intensive and expensive, and carries a high carbon footprint unless powered by renewables, which are still in their infancy in the region.

Western economies are striving to build more renewables into their energy mix as they pursue net-zero targets. However, the process is slow, not least because grid-scale battery storage capacity still has some way to go before it can comprehensively address the variability of solar and wind power. As mentioned earlier, renewables currently account for only around a third of global electricity generation, at a time when AI development is advancing rapidly and demanding ever more power. So, while organisations looking to develop and use models in a sustainable way will try to draw on more green power, its cost and limited availability will militate against this. The challenge is made harder by the fact that very few regions have both plentiful renewable energy and abundant fresh water.

Decision-makers will, essentially, need to optimise their use of resources: in other words, strike an equilibrium between consumption and carbon emissions that delivers the most environmentally friendly outcome possible. But this process won’t just be about trade-offs, such as offsetting high fossil fuel use in one part of the business with high renewable energy use in another. It should be combined with more efficient resource use, in particular of water, with statistics showing that the same AI model can have different water-efficiency rates in different parts of the world. Efficiencies can also be made in other ways, for example by upgrading computer hardware to lower energy consumption and by developing alternatives to freshwater for cooling data servers.

Transparency for consumers 

In much the same way as a multinational drinks company might pass environmental responsibility on to consumers by reminding them that their cans are recyclable and can be disposed of in recycling bins, developers of AI models can advise their customers on how to make better choices by setting out how much energy certain tasks consume: text-based queries, for instance, use less energy than image-based ones. Greater transparency of this kind would help AI users make more informed decisions.

If they have not already done so, businesses and public services should conduct an audit of the resources required to drive their AI systems, then look to see whether some of the actions I have outlined might help them reduce their carbon emissions and freshwater use.

As AI use continues to grow, and its heavy resource footprint is better understood, organisations will no doubt come under greater scrutiny over their deployment of the technology, not only from stakeholders keen to see ESG compliance but also from environmentalists and policymakers wanting to ensure that AI advances decarbonisation.

About the Author

Megha Kumar leads Oxford Analytica’s work on the Global Technology, Media and Telecommunications (TMT) sector.