IES Consulting recently partnered with a global data centre operator, providing dynamic simulation services to future-proof its facilities and build resilience against the demands of AI-driven workloads. Leveraging advanced digital twin technology and climate-specific dynamic simulation, the project delivered measurable efficiency outcomes, demonstrating an industry-leading PUE of 1.16 and a possible >90% relative reduction in water usage for one existing site.
In this latest blog, Michael Pollock, Senior Consultancy Manager – HVAC Modelling, delves into the key challenges facing data centres when optimising cooling strategies amidst escalating AI-driven computational demands, sharing insights and learnings from the project.
The widespread adoption of artificial intelligence (AI) presents a considerable challenge for data centre operators as they strive to keep pace with the escalating demand for computational resources. This surge in demand places significant pressure on existing data centres, which frequently struggle to accommodate the increased cooling loads required to support high-intensity AI workloads. As a result, the cooling requirements for both current and future data centres must be carefully evaluated and addressed in the planning and design stages. Proper consideration of these cooling needs is essential to ensure that facilities can operate efficiently and reliably, even as computational demands continue to rise.
IES collaborated with a global data centre operator whose core commitments include sustainability. Accordingly, the organisation prioritises optimisation of operational efficiency, reduction of CO2 emissions, and effective management of water resources.
Data centres require large, energy-intensive cooling systems to manage the heat generated by their IT equipment, and these systems can also consume significant volumes of water. To improve energy efficiency, operators apply different techniques such as bringing in outside air (free cooling), direct or indirect evaporative cooling, dry coolers, and water-side economisers. The effectiveness of these approaches depends partly on the local climate; a method that is successful in one region might not be suitable elsewhere.
IES created a digital twin of the operator’s current facility to analyse essential design elements within data halls and determine optimal simulation approaches. An important aspect of data hall design is implementing an effective containment strategy to control heat generated by server equipment.
Hot-aisle/cold-aisle containment is a widely adopted airflow management technique in data centres, designed to enhance cooling efficiency by preventing the intermixing of hot and cold air streams. In standard configurations, server racks are positioned in alternating rows, with equipment air intakes oriented toward the "cold aisle" and exhausts directed toward the "hot aisle." Cold air is delivered to the cold aisles by computer room air conditioning (CRAC) units or air-handling systems, while the hot air expelled from the servers is isolated within the hot aisles and returned to the cooling plant.
From an energy-efficiency standpoint, containment enables data centres to maintain higher and more consistent supply-air temperatures. With precise control of server inlet conditions, cooling systems avoid overcooling. This allows the use of higher chilled-water temperatures and increases economiser operation, resulting in enhanced efficiencies for chillers, fans, and pumps. Notably, fans do not need to compensate for losses due to air mixing and leakage, reducing fan energy consumption.
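As a rough illustration of why minimising bypass and recirculation matters for fan energy, the short Python sketch below applies the fan affinity laws (fan power scaling approximately with the cube of airflow) to estimate the penalty of over-supplying air to make up for uncontained leakage. The 10% leakage figure and the baseline fan power are illustrative assumptions, not values from this project.

```python
def fan_power_with_leakage(base_fan_kw: float, leakage_fraction: float) -> float:
    """Estimate fan power when airflow is increased to offset leakage/bypass.

    Fan affinity laws: power scales roughly with the cube of volumetric flow,
    so supplying (1 + leakage) times the required airflow costs approximately
    (1 + leakage)**3 times the baseline fan power.
    """
    return base_fan_kw * (1.0 + leakage_fraction) ** 3


# Illustrative numbers only: a 100 kW fan system over-supplying 10% to
# compensate for uncontained leakage draws roughly a third more power.
baseline_kw = 100.0
penalised_kw = fan_power_with_leakage(baseline_kw, 0.10)
print(f"Fan power with 10% over-supply: {penalised_kw:.1f} kW "
      f"(+{(penalised_kw / baseline_kw - 1) * 100:.0f}%)")
```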
The IESVE model of the data hall was divided into separate cold aisles, server racks, and hot aisles to accurately simulate containment, with airflow managed within each zone. Modelling the entire space as a single zone, as is typical of many traditional approaches, would average out the significant temperature differences between containment areas, leading to inaccurate efficiency predictions.
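The toy calculation below, using assumed cold and hot aisle temperatures rather than project results, shows how a fully mixed single-zone model hides the temperature split that containment is designed to preserve.

```python
# Illustrative containment example: a contained, multi-zone model tracks cold
# aisle and hot aisle air separately; a single-zone model sees only their mix.
cold_aisle_c = 24.0  # assumed cold aisle (server inlet) temperature, degC
hot_aisle_c = 38.0   # assumed hot aisle (server exhaust) temperature, degC

# Fully mixed single-zone temperature (equal air volumes assumed for simplicity)
mixed_c = (cold_aisle_c + hot_aisle_c) / 2
print(f"Single-zone model reports ~{mixed_c:.1f} degC everywhere, hiding the "
      f"{hot_aisle_c - cold_aisle_c:.0f} K split that drives coil duty and "
      "economiser hours.")
```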
Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) are two key metrics commonly employed to evaluate the operational efficiency and sustainability performance of data centres. PUE is calculated as the ratio of total energy consumed by the data centre to the energy consumed by IT equipment. A PUE value of 1.0 signifies an optimal condition in which all incoming power is dedicated exclusively to IT loads, with no additional consumption for cooling, power distribution, or lighting. In practice, PUE typically ranges from approximately 1.2 for highly efficient hyperscale facilities to more than 1.6 for older or less optimised environments. PUE serves as an essential metric for data centre efficiency, offering a clear and quantifiable means of determining how effectively a facility converts electrical input into useful computational output.
WUE serves as a valuable counterpart to PUE, measuring water consumption efficiency at data centres, primarily related to cooling operations. WUE is calculated by dividing a facility’s total annual water usage by the IT equipment’s annual energy consumption, generally expressed in litres per kilowatt-hour. This metric captures all categories of on-site water usage, including cooling towers and humidification systems. As data centres expand into regions facing water scarcity or increased regulatory constraints, monitoring WUE has become increasingly significant. It is essential for operators to recognise that optimising one efficiency metric may have implications for others, necessitating a holistic approach to performance management. By evaluating metrics such as PUE and WUE together, operators can achieve a comprehensive understanding of data centre efficiency and ultimately enhance operational performance while mitigating environmental impact.
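As a quick illustration of how these two metrics are derived, the sketch below computes PUE and WUE from annual energy and water figures. The numbers are purely illustrative and are not taken from the project.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh


def wue(annual_water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: annual site water use / IT energy (L/kWh)."""
    return annual_water_litres / it_kwh


# Illustrative annual figures (not project data):
it_energy_kwh = 50_000_000        # 50 GWh of IT load
facility_energy_kwh = 60_000_000  # IT plus cooling, power distribution, lighting
water_litres = 10_000_000         # cooling and humidification water

print(f"PUE = {pue(facility_energy_kwh, it_energy_kwh):.2f}")  # 1.20
print(f"WUE = {wue(water_litres, it_energy_kwh):.2f} L/kWh")   # 0.20
```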
The data centre developed for the digital twin was located in a cool, dry climate zone and was therefore ideally suited to a Direct Evaporative Cooling (DEC) system. In this design, cooling is provided directly from outdoor air whenever the ambient temperature is below the cold aisle supply air temperature setpoint (75°F/23.9°C). When the ambient temperature rises above this setpoint, evaporative coolers are used to bring the supply air temperature down to the setpoint.
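The control sequence described above can be sketched as follows. The constant evaporative effectiveness and the example dry-bulb/wet-bulb pairs are simplifying assumptions for illustration only; the sequence modelled in the digital twin is more detailed.

```python
SUPPLY_SETPOINT_C = 23.9  # cold aisle supply air setpoint (75 degF)


def dec_supply_mode(ambient_db_c: float, ambient_wb_c: float,
                    evap_effectiveness: float = 0.85) -> tuple[str, float]:
    """Decide the operating mode of a direct evaporative cooling (DEC) system.

    - If the outdoor dry-bulb is at or below the supply setpoint, outside air
      is used directly (free cooling).
    - Otherwise the evaporative cooler pulls the supply air towards the
      wet-bulb temperature; the achievable temperature is approximated here
      with a constant effectiveness (an assumption for this sketch).
    """
    if ambient_db_c <= SUPPLY_SETPOINT_C:
        return "free cooling", ambient_db_c
    supply_c = ambient_db_c - evap_effectiveness * (ambient_db_c - ambient_wb_c)
    mode = "evaporative cooling" if supply_c <= SUPPLY_SETPOINT_C else "setpoint exceeded"
    return mode, supply_c


# Example hours (illustrative dry-bulb / wet-bulb pairs):
for db, wb in [(18.0, 12.0), (30.0, 16.0), (35.0, 27.0)]:
    mode, supply = dec_supply_mode(db, wb)
    print(f"DB {db:.0f} degC / WB {wb:.0f} degC -> {mode}, supply ~{supply:.1f} degC")
```

The third example hour hints at the limitation discussed later in this post: in hot, humid conditions the wet-bulb temperature is too high for evaporative cooling to reach the setpoint.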
The implementation of direct evaporative cooling significantly enhanced the building’s energy efficiency, as the data centre operates without mechanical cooling systems. This eliminates the electrical consumption associated with chillers, cooling tower fans, and chilled water and condenser water pumps. While evaporative cooling does require some water usage, the overall WUE remained low, given that this cooling method was necessary for only approximately 15% of the year.
A comparative study assessed the building’s potential performance without free cooling, instead using a standard CRAC system with chilled water coils served, in separate scenarios, by water-cooled and air-cooled chillers.
The water-cooled chiller demonstrates greater operational efficiency than the air-cooled chiller, as evidenced by achieving a lower PUE (1.29 versus 1.38). However, this efficiency comes at the cost of increased water consumption due to the necessity of cooling towers for heat rejection. In contrast, the air-cooled chiller does not require an evaporation process for heat dissipation and therefore incurs no water usage attributable to cooling. The following chart presents a comparison of the PUE and WUE metrics for this data centre across three design options, illustrating that the current design outperforms conventional systems in these key performance areas.

The use of a water-cooled chiller with cooling towers requires approximately 270 million litres of water annually, which corresponds to the yearly consumption of ~2,000 households. While this level of water usage may be justified in scenarios where higher efficiency is desired and water scarcity is not an issue, it underscores the importance of evaluating metrics beyond PUE for achieving truly comprehensive and efficient data centre design.
The chart below shows that both the water-cooled chiller and the Direct Evaporative Cooling designs use large amounts of water. However, prioritising free cooling can cut water usage by over 90% in this type of climate.
While DEC was an effective design solution for this scenario, the same cannot be assumed for warmer, more humid climates. In such cases, evaporative cooling could be ineffective, with a risk that cold aisle conditions exceed their intended design criteria.
ASHRAE defines climate zones based on the number of Heating Degree Days (HDD), Cooling Degree Days (CDD) and the volume of rainfall per year. The table below describes these climate zones. The map below presents the distribution of data centres across US states, accompanied by ASHRAE Climate Zone information. While data centres are located throughout the nation, the highest concentrations are found in Virginia (Climate Zone 4A) and Texas (Climate Zone 2A).

| Climate Zone | Temperature Description |
| --- | --- |
| 1A & 1B | Very Hot |
| 2A & 2B | Hot |
| 3A, 3B & 3C | Warm |
| 4A, 4B & 4C | Mixed |
| 5A, 5B & 5C | Cool |
| 6A & 6B | Cold |
| 7 | Very Cold |
| 8 | Subarctic |

The letter suffix denotes the humidity regime: A - Humid, B - Dry, C - Marine.
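Degree days themselves are straightforward to compute from daily mean outdoor temperatures against a base temperature. The sketch below uses the common 65°F (18.3°C) base and a purely illustrative week of data.

```python
BASE_TEMP_C = 18.3  # common 65 degF base temperature; other bases are also used


def degree_days(daily_mean_temps_c: list[float]) -> tuple[float, float]:
    """Return (heating degree days, cooling degree days) for a series of
    daily mean outdoor temperatures in degC."""
    hdd = sum(max(BASE_TEMP_C - t, 0.0) for t in daily_mean_temps_c)
    cdd = sum(max(t - BASE_TEMP_C, 0.0) for t in daily_mean_temps_c)
    return hdd, cdd


# Illustrative week of daily mean temperatures (degC), not measured data:
week = [4.0, 6.5, 10.0, 15.0, 20.0, 25.5, 28.0]
hdd, cdd = degree_days(week)
print(f"HDD = {hdd:.1f}, CDD = {cdd:.1f}")
```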
The plot below visualises the HDD and CDD for Ashburn and Dallas, two of the cities with the highest concentrations of US data centres.

Dynamic simulation enables designers to evaluate design strategies and innovative solutions that optimise energy performance while minimising risk in the final design. The figures below provide a comparison of the psychrometric range for server inlet conditions using a DEC approach in Climate Zones 5B and 2A. In Climate Zone 5B, the DEC system consistently meets recommended criteria throughout the year. However, implementing this same design in Texas (Climate Zone 2A) would result in cold aisle conditions exceeding the recommended parameters for a substantial portion of the year, presenting a considerable design risk.
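One way to quantify that risk is to count the simulated hours in which server inlet conditions fall outside the ASHRAE recommended envelope. The sketch below checks dry-bulb temperature only, against the commonly cited 18–27°C recommended range (humidity limits would also need to be checked); the hourly values are placeholders standing in for annual simulation output.

```python
RECOMMENDED_MIN_C = 18.0
RECOMMENDED_MAX_C = 27.0  # ASHRAE recommended dry-bulb range for server inlets


def fraction_outside_envelope(inlet_temps_c: list[float]) -> float:
    """Fraction of hours in which cold aisle (server inlet) dry-bulb
    temperature falls outside the recommended range."""
    out = sum(1 for t in inlet_temps_c
              if t < RECOMMENDED_MIN_C or t > RECOMMENDED_MAX_C)
    return out / len(inlet_temps_c)


# Placeholder hourly results standing in for an annual simulation (8,760 hours):
simulated_inlet_c = [22.0] * 7000 + [28.5] * 1760
print(f"Hours outside recommended envelope: "
      f"{fraction_outside_envelope(simulated_inlet_c):.1%}")
```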


Historically, data centres have relied primarily on air-cooled systems for thermal management. However, as IT equipment power densities escalate, driven by AI demand, the conventional air-cooled data centre design is approaching both practical and economic limitations. Elevated rack densities result in substantially higher heat flux, necessitating increased airflow and fan energy, as well as narrower temperature differentials, all of which compromise efficiency and operational resilience. Liquid cooling technologies, including rear-door heat exchangers, direct-to-chip methods, and immersion cooling, provide a more efficient approach to heat dissipation owing to the significantly greater thermal capacity of liquids compared to air. The adoption of these solutions facilitates greater compute density within a reduced physical footprint, lowers overall cooling energy requirements, and accommodates the integration of next-generation processors that cannot be effectively or sustainably cooled through air-based methods alone.
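To put the thermal-capacity advantage of liquids into rough numbers, the sketch below compares the volumetric flows of air and water required to remove the same rack load for a given temperature rise, using Q = m·cp·ΔT. The fluid properties are standard textbook values; the rack load and temperature rises are illustrative assumptions.

```python
def flow_required_m3_per_s(load_kw: float, density_kg_m3: float,
                           cp_kj_per_kg_k: float, delta_t_k: float) -> float:
    """Volumetric flow needed to absorb a heat load: Q = m_dot * cp * dT."""
    mass_flow_kg_s = load_kw / (cp_kj_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_m3


rack_load_kw = 80.0  # illustrative high-density AI rack

# Air: density ~1.2 kg/m3, cp ~1.005 kJ/kg.K, 12 K rise across the rack
air_flow = flow_required_m3_per_s(rack_load_kw, 1.2, 1.005, 12.0)

# Water: density ~998 kg/m3, cp ~4.18 kJ/kg.K, 10 K rise across a cold plate loop
water_flow = flow_required_m3_per_s(rack_load_kw, 998.0, 4.18, 10.0)

print(f"Air flow needed:   {air_flow:.2f} m3/s")
print(f"Water flow needed: {water_flow * 1000:.2f} L/s")
print(f"Volumetric ratio (air/water): {air_flow / water_flow:,.0f}x")
```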
This rapidly evolving design environment for data centres presents both challenges and opportunities for design teams looking to develop efficient and sustainable solutions, both of which can be captured through dynamic simulation.
Check out our data centres page or connect with our consultants today to discuss your project requirements.