January 26, 2026
Deep Green
When it comes to data centres, there’s no doubt that the US dominates. With 4,165 data centres as of November 2025, the US has almost ten times more than any other country in the world.
There’s clearly a demand for high-performance compute across the pond. But with grid constraints and local resistance putting strain on data centre builds, we also see an opportunity to change the narrative. By integrating heat reuse with compute, we can turn data centres into urban ecologies that benefit communities, reduce emissions, and make the most of every kilowatt the grid has to offer.
We sat down with Mark Lee, Chief Executive Officer at Deep Green, to discuss the shift towards inference workloads, the potential of the Northern US states, and how to succeed as an innovator in this field.
From your point of view, how would you characterise the current phase of the US data centre market? Are we still in an acceleration phase, or are we starting to see nuanced segmentation?
Looking specifically at the US data centre market, I’d say we’re on the cusp of a whole new phase. The last decade was all about hyperscale expansion and compute growth. The market’s still growing, with AI and HPC driving demand. However, distinct segments are now emerging depending on workload type, latency sensitivity, power profile, and geographic requirements.
When you look at AI training, inference, and traditional HPC as distinct demand drivers, how do their infrastructure needs differ in the US today? Which do you expect to dominate net new capacity over the next three to five years?
When we talk about AI, a lot of people are referring to large-scale training clusters. Training at that scale requires established hyperscale hubs in regions with proven grid capacity. But it’s also vulnerable to grid congestion and community resistance, which we’ve seen are already slowing expansion.
As for inference workloads, they require a whole new kind of setup. Inference means running compute continuously to process new data, and low latency is essential: any delay degrades the responsiveness of real-time applications. Because of this, compute is placed geographically close to the demand centre. The need for inference is growing fast and is expected to soon outpace AI training demand. So, in answer to your question: inference will dominate in three to five years, for sure.
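To put the latency point in rough numbers, here is a minimal back-of-envelope sketch. It assumes signals travel through fibre at roughly two-thirds the speed of light (about 200,000 km/s) and ignores routing, queuing, and processing delays, which only add to the total; the distances are illustrative, not specific sites.

```python
# Back-of-envelope fibre latency: why inference compute sits near demand.
# Assumes light in fibre travels at ~200,000 km/s (about two-thirds of c)
# and ignores routing, queuing, and processing delays, which add more.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over fibre, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for km in (50, 500, 3000):  # nearby metro, regional hub, cross-country
    print(f"{km:>5} km -> {round_trip_ms(km):.1f} ms minimum round trip")
```

Even before any processing, a cross-country round trip costs tens of milliseconds, while a metro-local data centre stays under a millisecond: this is the physics behind placing inference capacity near demand centres.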
Then, there’s HPC. Scale isn’t as crucial here: what’s important is proximity, reliability, and integration with local industries.
Many headlines focus on hyperscale training clusters. But where do you see the most under-discussed growth in inference or HPC workloads geographically within North America?
The US has several data centre hubs: Northern Virginia, Dallas-Fort Worth, and Silicon Valley. But power availability is a huge bottleneck in these typical data centre locations. Utility companies don’t always want to approve new large loads, so it’s becoming necessary to look beyond the standard hubs. This brings us to the Northern US states, which have so far been overlooked despite being ideal locations for inference workloads.
How attractive do you think these Northern regions are becoming for next-generation compute, compared to traditional hubs like Northern Virginia or Texas?
There are several advantages. Firstly, and most obviously, the temperatures are cooler. This means data centres can operate more efficiently and with less strain on cooling systems. There’s also established utility infrastructure. What’s more, this region is home to several large cities; building data centres close to cities means lower latency when it comes to inference workloads.
We’ve also got to remember that the North is a strong base for several industries that require HPC and inference workloads. Michigan, for example, has everything from advanced manufacturing to automotive innovation to academic research. This region has a distinct need for more data centres.
Power availability and grid constraints are now board-level issues. How are US operators thinking about efficiency, flexibility, and alternative design models as they plan future capacity?
When we talk about efficiency in data centres, we often look to Power Usage Effectiveness (PUE), the ratio of total facility power to IT power, as an indicator. But, especially as pressure on the grid increases, a stellar PUE rating isn’t enough. There isn’t always enough power to go around. That’s why data centre design has to extract as much value as possible from each megawatt of electricity, for example through heat reuse.
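To make that concrete, The Green Grid’s Energy Reuse Effectiveness (ERE) metric extends PUE by crediting energy exported for reuse. The sketch below uses purely illustrative figures (not Deep Green data) to show how heat reuse changes the picture even when PUE stays the same.

```python
# A minimal sketch of why PUE alone understates the value of heat reuse.
# All figures are illustrative assumptions, not measurements from any facility.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

def ere(total_facility_kw: float, it_kw: float, reused_kw: float) -> float:
    """Energy Reuse Effectiveness (The Green Grid): like PUE, but energy
    exported for reuse (e.g. heat sent to an off-taker) is subtracted first."""
    return (total_facility_kw - reused_kw) / it_kw

# Hypothetical 1 MW IT load with 200 kW of cooling and other overhead.
it_load, overhead = 1000.0, 200.0
total = it_load + overhead

print(f"PUE: {pue(total, it_load):.2f}")                         # 1.20 either way
print(f"ERE, no heat reuse: {ere(total, it_load, 0.0):.2f}")     # 1.20
print(f"ERE, 600 kW reused: {ere(total, it_load, 600.0):.2f}")   # 0.60
```

Unlike PUE, which can never fall below 1.0, ERE drops below 1.0 once a facility exports more energy than it spends on overhead: that is the “value per megawatt” argument in a single number.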
Heat has historically been treated as a waste product in US data centres. Do you see attitudes shifting toward heat as an asset, and if so, what’s driving that change?
Previously, we didn’t consider heat as an asset because we didn’t need to. There was an abundance of power, cheap energy, and less awareness of environmental impact. That’s changing: electricity prices are high, grids are congested, and there’s pressure to meet net-zero and ESG targets. Citizens are resisting new data centre builds because they’re questioning what these data centres add to their communities. With heat reuse in the conversation, all that can change. We’re learning lessons from the European data centre landscape, where heat reuse is more prevalent.
Which sectors do you think are most likely to value or monetise data centre heat reuse in the next decade?
So many sectors could benefit from reusing data centre heat. You’ve got industrial facilities, greenhouses and controlled-environment agriculture, universities, hospitals, public buildings, and municipal infrastructure. The list goes on. Especially for the Northern states during the winter months, the demand for heat is huge.
As AI inference and HPC move closer to end users and industrial demand, how important do you think proximity to heat off-takers and local communities will become in site selection?
There’s definitely a shift in how sites are selected. Because inference and HPC workloads are becoming more decentralised, we have more flexibility over the physical location of compute. This means data centres can be built next to existing heat off-takers (partners who will use data centre heat), or in conjunction with these partners in new urban ecologies. And communities will benefit from nearby compute, instead of resisting it.
Imagine a company was designing a new data centre with long-term sustainability and social licence in mind. What’s essential for success in the US market?
In the US, we see so much local opposition to data centres. This can’t be overlooked. People believe that data centres put strain on the grid without benefiting them, and in some cases, this is true. So, if new data centre builds can integrate with communities and offer clear local value, this is a game-changer. Heat reuse models embed data centres within local ecosystems; this is a clear advantage in terms of securing approvals and ensuring long-term viability.
Deep Green’s US ambition
When it comes to data centres, the US still lives up to its reputation as the land of opportunity, especially in the Northern states. At Deep Green, we’ve proposed a Michigan-based data centre with heat reuse at its core. Check out our Lansing data centre vision to see how our American ambitions could come to life.