The AI boom is reshaping the digital landscape – and nowhere is this more evident than in data centres. As models grow larger and more compute-intensive, the infrastructure supporting them must evolve rapidly. Racks that once consumed 30kW are now pushing beyond 100kW, with 1MW+ configurations quickly becoming a reality.
Meeting these demands is not just about scaling capacity – it is about rethinking the entire power delivery chain for maximum efficiency and sustainability. This includes the adoption of high-voltage DC (HVDC) architectures, which offer improved power distribution efficiency and reduced conversion losses, and the introduction of liquid cooling technologies, which are essential for managing the thermal loads of ultra-dense compute environments.
Introducing liquid cooling
As AI and high-performance computing (HPC) workloads continue to exceed the thermal limits of traditional air-cooled systems, liquid cooling has become the go-to solution for removing the heat produced in high-density compute environments. Direct-to-chip cooling technology is enabling significant improvements in energy efficiency and sustainability, including zero water consumption, a more-than-50% decrease in cooling power usage, and an 18% decrease in total power consumption. At scale, these gains could prevent annual emissions of 35 million metric tons of CO2.
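As a rough sanity check on that headline emissions number, the arithmetic is straightforward. In the sketch below, the global data centre consumption and grid carbon intensity are assumed round figures, not values from this article; only the 18% reduction comes from the text.

    # Back-of-envelope check (all inputs are assumptions except the
    # 18% total power reduction cited above).
    global_dc_twh = 415            # assumed annual data centre consumption, TWh
    grid_kg_co2_per_kwh = 0.45     # assumed average grid carbon intensity
    power_reduction = 0.18         # 18% total power decrease, per the article

    saved_kwh = global_dc_twh * 1e9 * power_reduction   # TWh -> kWh
    saved_t_co2 = saved_kwh * grid_kg_co2_per_kwh / 1000  # kg -> metric tons
    print(f"~{saved_t_co2 / 1e6:.0f} million metric tons CO2 avoided per year")

Under those assumptions the result lands in the mid-30s of millions of tonnes – the same order as the figure quoted above.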
While AI workloads dominate the headlines, they still represent only a fraction of global data centre power usage. Most of the infrastructure remains dedicated to CPU-based workloads, which also benefit from advanced cooling solutions. Innovations like standalone liquid cooling systems are designed to integrate seamlessly into existing data centre environments, delivering immediate performance and efficiency improvements without requiring major infrastructure changes. Increasingly, hybrid cooling approaches – combining air and liquid cooling – are being adopted to optimise thermal management across diverse workloads, striking a balance between efficiency, scalability, and flexibility.
Rethinking power architectures
As AI workloads continue to push data centre rack densities higher, operators are rethinking how to meet growing power demands with maximum efficiency, scalability, and sustainability. A key innovation gaining traction is the shift toward HVDC architectures, particularly +/- 400 V DC and 800 V DC systems, along with the solid-state technologies they enable. These configurations have the potential to reduce conduction losses, enable longer cable runs, and minimise the conversion stages required to step power down from the grid. The result is improved overall system efficiency and reduced thermal management complexity.
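The conduction-loss benefit follows from basic physics: for a fixed power draw, resistive loss is I²R, so doubling the distribution voltage halves the current and quarters the loss. A minimal sketch, in which the cable resistance is an arbitrary illustrative value rather than a measured one:

    # Conduction loss comparison: same power, same cable, different voltages.
    # The cable resistance is an illustrative assumption, not from the article.
    def conduction_loss_w(power_w, voltage_v, cable_resistance_ohm):
        current_a = power_w / voltage_v               # I = P / V
        return current_a ** 2 * cable_resistance_ohm  # P_loss = I^2 * R

    rack_power_w = 100_000        # a 100kW rack, as cited earlier
    r_cable = 0.005               # assumed 5 milliohm distribution run
    for v in (48, 400, 800):
        loss = conduction_loss_w(rack_power_w, v, r_cable)
        print(f"{v:>3} V: current {rack_power_w / v:7.1f} A, loss {loss:8.1f} W")

The 48 V case makes the point vividly: at rack-scale power, low-voltage distribution losses become untenable over any distance, which is exactly why higher DC voltages enable longer cable runs.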
Another advance is a power shelf system optimised for next-generation AI platforms that achieves 97.5% efficiency at half-load. By leveraging native 800 V DC input, the system streamlines power conversion and reduces the need for intermediate AC stages. This improves energy efficiency while simplifying infrastructure design, allowing for denser deployments and faster scalability within the same data centre footprint.
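To see what that efficiency figure means thermally, note that every point of conversion loss becomes heat the cooling system must then remove. In the sketch below, the 1 MW shelf rating and the 94% comparison point are assumptions for illustration; only the 97.5% figure comes from the text.

    # Heat dissipated by the power conversion stage itself at half-load.
    # The 1 MW rating and the 94% comparison are illustrative assumptions.
    shelf_rating_w = 1_000_000
    load_w = shelf_rating_w / 2           # half-load, per the quoted figure

    for eta in (0.975, 0.94):
        input_w = load_w / eta
        heat_w = input_w - load_w         # conversion loss ends up as heat
        print(f"eta = {eta:.1%}: {heat_w / 1000:5.1f} kW dissipated as heat")

Under these assumptions, the higher-efficiency shelf roughly halves-again the waste heat of its comparison point – capacity the cooling plant no longer has to absorb.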
The future of power
Looking ahead, the next generation of data centre infrastructure will be defined by radical efficiency gains – not just in energy consumption, but in physical space and system design. Traditionally, converting incoming AC power to a DC voltage usable at the chip level required several conversion steps, each of which negatively impacted energy efficiency. But we are seeing higher DC voltages emerge in the data centre, including 800 V DC, which allows direct connection to renewable energy systems, and +/- 400 V DC, which enables the integration of capacitive energy storage systems (CESS), battery energy storage systems (BESS), and microgrid applications.
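Because stage efficiencies multiply, each additional conversion step compounds the loss. The per-stage efficiencies below are illustrative assumptions rather than measured values, but they show why collapsing the chain matters:

    # Cascaded conversion: end-to-end efficiency is the product of the stages.
    # Per-stage efficiencies are illustrative assumptions, not measured data.
    from math import prod

    legacy_chain = [0.97, 0.96, 0.98, 0.96]  # e.g. UPS, transformer, rectifier, final step-down
    hvdc_chain = [0.975, 0.98]               # e.g. grid front end, native 800 V DC shelf

    for name, chain in (("legacy AC chain", legacy_chain),
                        ("native 800 V DC", hvdc_chain)):
        print(f"{name}: {prod(chain):.1%} end-to-end")

With these made-up but plausible numbers, the four-stage chain delivers under 88% of the incoming power to the rack, while the condensed DC path delivers over 95% – the gap the architectures above are designed to close.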
Condensing power conversion into a single solid-state transformer not only produces efficiency gains but also significantly reduces the square footage required for electrical equipment – which, when combined with higher density compute and cooling, could mean up to 90% smaller data centre footprints by 2030. This opens new paths to profitability: saving on construction costs or increasing compute capacity in the existing envelope by adding more racks. We call this the convergence of power and IT, and it is a welcome step forward.
Building a scalable, sustainable AI infrastructure
As AI continues to evolve, so too must the infrastructure that powers it. From liquid cooling to HVDC systems and solid-state transformers, the future of data centres lies in integrated, efficient, and sustainable design. The next few years will be critical in shaping how we compute – and how responsibly we do it.