As the physical requirements of high-performance computing evolve, the limits of air cooling have become the limits of innovation. DARKNX is removing those barriers across our entire portfolio of data center projects by partnering with Accelsius, the leader in advanced thermal technologies.
Together, we are moving away from traditional infrastructure to the leading edge of cooling architecture, ensuring that every DARKNX facility is purpose-built to handle the extreme heat profiles of next-generation GPU and AI clusters.
"DARKNX and Accelsius: Redefining the thermal envelope to power the global AI economy."
The NeuCool MR250 is Accelsius's first row-based Coolant Distribution Unit purpose-built for AI and HPC at scale. It delivers 250kW of liquid cooling per rack in flexible configurations (1×250kW or 2×125kW), with industry-leading thermal resistance and zero ozone depletion potential.
A dielectric refrigerant flows through vaporators (cold plates) mounted directly to GPU and CPU hot spots. Heat from the chip causes the refrigerant to nucleate and vaporize.
The vapor, now carrying the chip's heat as latent energy, travels through insulated lines from the server rack to the row-based Coolant Distribution Unit (CDU).
Inside the NeuCool MR250 CDU, the refrigerant vapor condenses back into liquid, releasing its heat to facility cooling infrastructure via water-cooled doors, dry coolers, or other heat rejection methods.
The cooled liquid refrigerant returns to the vaporators at the chips, completing the closed loop. No water ever contacts the IT equipment: zero corrosion risk, zero leak contamination.
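The loop above is, at heart, a simple energy balance: all of the rack's heat leaves as latent heat carried by the vapor. A minimal sketch of that balance, assuming a latent heat of vaporization of roughly 190 kJ/kg (a representative figure for a low-GWP dielectric refrigerant; the article does not specify the working fluid or its properties):

```python
# Energy balance for the closed two-phase loop described above.
# ASSUMPTION: h_fg ~ 190 kJ/kg is a rough, illustrative latent heat
# for a dielectric refrigerant; the actual fluid is not named here.

RACK_HEAT_KW = 250.0      # MR250 rated cooling capacity per rack
H_FG_KJ_PER_KG = 190.0    # assumed latent heat of vaporization

def refrigerant_mass_flow(heat_kw: float, h_fg_kj_per_kg: float) -> float:
    """Mass flow (kg/s) needed to carry `heat_kw` away as latent heat."""
    return heat_kw / h_fg_kj_per_kg

flow = refrigerant_mass_flow(RACK_HEAT_KW, H_FG_KJ_PER_KG)
print(f"~{flow:.2f} kg/s of refrigerant moves {RACK_HEAT_KW:.0f} kW of heat")
```

Under these assumed numbers, a little over 1 kg/s of circulating refrigerant suffices for a full 250kW rack, which is why the vapor lines between rack and CDU can stay compact.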
Unlike standard liquid cooling, which simply moves heat through a fluid, two-phase cooling utilizes the latent heat of vaporization. A specialized non-conductive dielectric refrigerant boils at the chip surface, absorbing far more energy per kilogram than warming water alone, enabling a new standard for data center efficiency and power density.
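The per-kilogram advantage can be made concrete with textbook figures. Assuming a refrigerant latent heat of ~190 kJ/kg (illustrative; the working fluid is not specified here), water's specific heat of 4.18 kJ/(kg·K), and a typical 10 K allowable single-phase coolant temperature rise:

```python
# Latent-heat uptake (boiling) vs. sensible heating (single-phase water).
# ASSUMPTIONS: h_fg ~ 190 kJ/kg is an illustrative refrigerant value;
# the 10 K coolant temperature rise is a typical, not quoted, figure.

H_FG = 190.0      # kJ/kg, assumed refrigerant latent heat of vaporization
CP_WATER = 4.18   # kJ/(kg*K), specific heat of liquid water
DELTA_T = 10.0    # K, assumed single-phase coolant temperature rise

sensible = CP_WATER * DELTA_T   # kJ absorbed per kg of water warmed 10 K
ratio = H_FG / sensible         # per-kilogram advantage of boiling
print(f"water: {sensible:.1f} kJ/kg, boiling: {H_FG:.0f} kJ/kg (~{ratio:.1f}x)")
```

Each kilogram of boiling refrigerant absorbs several times what a kilogram of water absorbs over a 10 K rise, so the same heat load needs far less coolant flow, and the coolant holds a nearly constant temperature while it boils.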
Our infrastructure is engineered to support megawatt-level rack densities, catering specifically to the extreme demands of hyperscalers and enterprise AI leaders.
By supporting rack densities from 250kW up to multiple megawatts, we enable the consolidation of massive compute into a significantly smaller, more efficient footprint.
The NeuCool® system demonstrates industry-leading cooling capability of up to 4,500W per socket. By using a two-phase cooling process, the system leverages the latent heat of vaporization to remove significantly more heat from the chip than conventional liquid or air-cooling approaches.
Two-phase systems operate effectively with facility water temperatures up to 8°C higher than competing solutions, broadening the conditions for chiller-less, free-cooling operation. A PUE of 1.08 is achievable, and near-zero water consumption eliminates evaporative towers entirely.
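To put the 1.08 figure in context: PUE is total facility power divided by IT power, so overhead is (PUE − 1) times the IT load. A quick comparison, assuming a conventional air-cooled baseline of PUE 1.4 (a hypothetical reference point; the article quotes only the 1.08 figure):

```python
# What a PUE of 1.08 means in practice.
# PUE = total facility power / IT power, so non-IT overhead
# (cooling, power delivery, etc.) is (PUE - 1) * IT load.
# ASSUMPTION: the 1.4 air-cooled baseline is illustrative, not quoted.

def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Non-IT overhead power (kW) implied by a given PUE and IT load."""
    return (pue - 1.0) * it_load_kw

IT_LOAD_KW = 1000.0  # hypothetical 1 MW of IT load
two_phase = overhead_kw(1.08, IT_LOAD_KW)
air_cooled = overhead_kw(1.40, IT_LOAD_KW)
print(f"overhead per MW of IT: {two_phase:.0f} kW at PUE 1.08 "
      f"vs {air_cooled:.0f} kW at PUE 1.40")
```

Under these assumptions, every megawatt of IT load carries only about 80 kW of overhead at PUE 1.08, versus roughly 400 kW at the assumed air-cooled baseline.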
From retrofitting existing spaces to massive greenfield developments, the Accelsius partnership allows DARKNX to deploy modular, future-proof cooling that scales as chip wattages continue to climb.
Across our diverse project landscape, the integration of Accelsius NeuCool® technology provides a standardized, high-performance cooling stack, consistent across every DARKNX deployment for simplified operations and maintenance.
Two-phase cooling allows DARKNX to consolidate massive compute power into a significantly smaller, more efficient footprint, unlocking new possibilities for urban edge deployments and dense hyperscale campuses.