AI is powerful, but is there enough power for its rapid expansion?
Editor's note:
The following article is the first of a two-part series examining how foundational infrastructure – particularly computing power and energy use – will shape the future of artificial intelligence.

As OpenAI prepares to roll out GPT-5, rumors suggest a cost-conscious shift in strategy is underway in the artificial intelligence industry, with a focus on efficiency rather than sheer scale.
This move appears to be a response to the growing limitations of physical infrastructure and the need for AI competitors such as China's DeepSeek to keep costs in check as demand for computing power accelerates at an unprecedented pace.
Investment in the industry has been heavy.
In 2024, tech giants Microsoft and OpenAI unveiled plans to collectively invest US$115 billion into supercomputing initiatives, with an eye toward the 2028 launch of their ambitious "Stargate" supercomputer.
Meanwhile, Elon Musk's xAI has made its mark with a bold vision: the construction of a "computing factory" linking 100,000 Nvidia H100 GPUs. The aim is to rival the computational firepower of Microsoft and OpenAI.
At the same time, Amazon, Google and Meta have significantly upped their spending, collectively pushing their 2024 capital expenditures to an estimated US$188 billion, nearly 40 percent higher than a year earlier.
Alarm bells sounding
Amid all this frenetic investment, Alibaba Chairman Joe Tsai has raised alarms about the risks of over-investment, warning that the AI data center boom could be approaching a "bubble" driven by an oversupply of infrastructure that may not align with actual demand.
These concerns are compounded by the environmental impact of sprawling data centers, which are significant contributors to global carbon emissions. As the AI sector grows, balancing infrastructure expansion with sustainability has never been more critical.
In the evolving landscape of artificial intelligence, cost efficiency is becoming a driving force. Firms like DeepSeek have led the charge, developing architectures that significantly reduce AI's operational costs.
Chinese companies, for instance, have driven the cost of inference – the process by which a trained AI model draws on what it has learned to generate responses to new inputs – down to as low as 5 US cents per million tokens, a move that has opened the door to broader AI accessibility. But as with many technological advancements, this newfound efficiency presents a paradox: Rather than limiting demand, it accelerates it.
The so-called "Jevons Paradox" is playing out in real time within the AI sector. As the cost of running models like GPT-3.5 has dropped to mere pennies, AI adoption has surged. This price reduction, coupled with growing investment in AI infrastructure, has created a feedback loop in which lower costs directly lead to increased demand – ultimately driving even more investment into the sector.
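The per-token prices above make the economics concrete. A minimal back-of-envelope sketch – the per-chat token count is an illustrative assumption, not a figure from the article:

```python
def inference_cost_usd(tokens: int, price_per_million_usd: float = 0.05) -> float:
    """Cost of processing `tokens` tokens at a flat per-million-token rate.

    The 5-cents-per-million-tokens rate is the figure cited in the article;
    real pricing varies by provider and model.
    """
    return tokens / 1_000_000 * price_per_million_usd

# Assume a typical chat exchange consumes ~1,000 tokens (prompt + response).
per_chat = inference_cost_usd(1_000)                  # $0.00005, i.e. 0.005 cents
per_million_chats = inference_cost_usd(1_000_000_000)  # a billion tokens

print(f"Cost per 1,000-token chat: ${per_chat:.5f}")
print(f"Cost per million such chats: ${per_million_chats:.2f}")
```

At five-thousandths of a cent per conversation, the marginal cost of serving one more user is effectively zero – which is precisely the condition under which Jevons-style demand growth takes hold.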
Large corporations, with their deep pockets and established infrastructure, are best positioned to take advantage of this trend, securing a dominant stake in the AI ecosystem.
Strategic assets
Yet, the story is not just one of efficiency and profit. Nations are increasingly treating AI infrastructure as a strategic asset.
To counter China's growing muscle, the US has restricted exports of advanced semiconductor chips to the country, while the European Union's AI Act aims to impose stringent ethical guidelines on AI deployment.
This geopolitical maneuvering highlights the urgency for countries like China to build self-reliance, with AI infrastructure part of a broader push for technological sovereignty. The rising stakes have transformed the global AI race into a high-level geopolitical chess match, where access to critical infrastructure is just as important as technological prowess.
China is rapidly positioning itself as a dominant force in artificial intelligence by making substantial investments in the critical infrastructure that underpins AI development. At the core of this strategy is the ambitious "Eastern Data, Western Computing" initiative, launched in early 2022.
This project seeks to leverage the country's abundant energy resources in western China to power the AI-driven future of eastern China's bustling coastal economic hubs.
By mid-2024, the initiative had already attracted over US$6.1 billion in investment, resulting in the establishment of eight major data centers across key regions like Inner Mongolia, Ningxia, Gansu and Guizhou. These data hubs are strategically located to capitalize on lower energy costs while providing the computational power needed to fuel China's rapidly expanding AI sector.
China's heavyweights in both the private and public sectors are embedding AI into their operations, driving constant demand for upgrades and enhancing infrastructure. The results are already becoming apparent: China's AI investments are not just shaping the tech sector; they are also driving broader economic growth.
Financial giants such as Industrial and Commercial Bank of China (ICBC) and China Construction Bank are integrating AI technologies to enhance services, including fraud detection and customer support.
ICBC, for example, utilizes its e-Security system to identify and intercept fraudulent transactions, effectively reducing telecom fraud risks. Sinopec, China's state-owned oil giant, is increasing its reliance on domestic AI infrastructure, sourcing a significant portion of its servers from local manufacturers.
This shift supports homegrown suppliers like Inspur and Sugon, which specialize in AI-accelerated hardware. It's noteworthy that both Inspur and Sugon have faced US export restrictions due to their involvement in developing supercomputers for alleged military applications.
The trend toward AI-powered infrastructure extends beyond traditional industries.
Consumer electronics giants like Xiaomi and Oppo are at the leading edge of AI development, enhancing applications for voice recognition and real-time image processing. By processing data locally, these technologies not only reduce latency but also enhance user privacy, underscoring the broader integration of AI into everyday devices.

Apple's China data center in Guizhou Province
Supply chains
Chinese technology giants are intensifying their efforts to build self-sustaining AI ecosystems, making significant investments across the supply chain – from chip development to cloud infrastructure. These moves signal China's ambition to reduce its reliance on Western technology and establish a dominant global presence in AI.
Huawei, a global provider of information and communications technology, is expanding its influence with Ascend 910 chips designed for large-scale model training. These chips power Huawei's Atlas AI computing systems, which are deployed globally, including in Europe.
Despite years of US sanctions, the company has managed to rebound with revenues approaching pre-sanction levels. It has diversified into key sectors, including smart-driving technology, software development and advanced chipmaking.
At the same time, Lenovo, bolstered by its acquisition of IBM's personal computer division, has emerged as a global leader in AI-optimized servers. The company now ranks among the world's top suppliers of supercomputers, strengthening China's competitive edge in AI infrastructure.
Chinese technology firms are aggressively extending their AI infrastructure capabilities beyond their borders, forging significant international partnerships that strengthen their global influence.
Alibaba Cloud, for one, is making significant strides in Southeast Asia. A standout example of this expansion is its collaboration with Indonesia's Gojek, a leading ride-hailing and delivery platform. By integrating Alibaba Cloud's serverless AI technology, Gojek has cut ride-hailing response times.
The serverless model lets companies like Gojek scale operations more efficiently: As demand for services grows, the architecture automatically allocates resources to manage the increased load. Such elasticity is essential for internet companies competing in fast-growing emerging markets.
Energy management
Chinese companies are adopting advanced cooling solutions to manage the increasing energy consumption and thermal demands of AI workloads.
Shandong Province-based Inspur is expanding its liquid-cooling operations to meet the surging demand for AI servers. Data center operators like Global Switch are implementing direct-to-chip liquid cooling systems in key locations such as Hong Kong to address the escalating power requirements of AI applications.
Alibaba's Qiandao Lake Data Center uses innovative cooling systems that tap lake water, solar energy and advanced technologies to achieve an annual average "power usage effectiveness," or PUE, ratio below 1.3, with operational averages around 1.27. This approach significantly reduces energy consumption compared with traditional mechanical cooling methods.
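Power usage effectiveness is simply total facility energy divided by the energy delivered to the IT equipment itself, so a PUE of 1.0 would mean every watt goes to computing. A short sketch with hypothetical annual figures chosen to match the ~1.27 cited above:

```python
def pue(it_energy_kwh: float, overhead_energy_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy.

    Overhead covers cooling, power-conversion losses, lighting, etc.
    """
    return (it_energy_kwh + overhead_energy_kwh) / it_energy_kwh

# Hypothetical annual figures for a single data hall (not actual Alibaba data):
it_load = 10_000_000               # kWh consumed by servers and network gear
cooling_and_overhead = 2_700_000   # kWh for cooling, UPS losses, lighting

print(f"PUE = {pue(it_load, cooling_and_overhead):.2f}")  # 1.27
```

For comparison, conventionally cooled facilities have often run at PUE values of 1.5 or higher, so shaving the ratio to 1.27 translates directly into double-digit percentage savings on total electricity.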
The Chinese government has introduced stringent energy efficiency regulations for data centers to curb excessive electricity consumption, influencing procurement decisions for advanced chips and shaping the operational dynamics of AI data centers.
In 2023, Chinese data centers consumed approximately 150 billion kilowatt-hours of electricity, about 1.6 percent of the country's total electricity usage. Projections suggest an increase to 380 billion kilowatt-hours annually by 2030.
To address this surge, policies aim to power new data centers with 80 percent green energy by 2025, a significant shift from the current reliance on coal-based power.
Multinational Chinese technology giant Tencent is leading by example. The company has launched a renewables-powered hybrid microgrid project at a data center in Hebei Province, combining wind, solar and battery energy storage to supply 10.54 megawatts of power and generate 14 million kilowatt-hours annually.
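The Tencent figures also let us estimate the microgrid's capacity factor – the fraction of its theoretical maximum output it actually delivers over a year. A quick check using only the numbers above:

```python
# Implied capacity factor of the Tencent microgrid, from the article's figures.
capacity_mw = 10.54         # installed capacity (10.54 million watts)
annual_output_mwh = 14_000  # 14 million kWh per year
hours_per_year = 8_760

capacity_factor = annual_output_mwh / (capacity_mw * hours_per_year)
print(f"Capacity factor: {capacity_factor:.0%}")  # ≈ 15%
```

A capacity factor around 15 percent is consistent with a mixed wind-and-solar installation, where output is intermittent – which is exactly why the project pairs generation with battery storage.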
The road ahead
As China pushes to dominate the global AI landscape, significant obstacles remain.
The most immediate challenge comes from US export controls on advanced chips, particularly Nvidia's H100. The restrictions are disrupting vital supply chains for Chinese AI firms.
At the same time, Chinese AI companies are still grappling with funding shortfalls.
While Beijing-based Cambricon Technologies raised US$800 million in 2023 through a listing on the Shanghai STAR Market, there is growing recognition that sustained research and development and expansion will require further capital.
Beijing-based Horizon Robotics, another key player, raised US$696 million in October in Hong Kong's biggest initial public offering of 2024, underscoring the trend of Chinese firms seeking public funding to accelerate growth and innovation.
Energy consumption also remains a significant challenge. A single Nvidia H100 server, which powers some of the most sophisticated AI models, draws a staggering 30 kilowatts of power – comparable to the average power draw of roughly 30 homes. Such high demand is prompting Chinese data centers to explore liquid-cooling technologies and integrate renewable energy solutions to mitigate costs and reduce their carbon footprints.
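The comparison with household usage can be sanity-checked in a few lines. The household consumption figure is an assumption (roughly in line with typical US averages), not from the article:

```python
# Rough check of the 30 kW server figure against household consumption.
SERVER_POWER_KW = 30       # continuous draw cited for an H100 server
HOURS_PER_YEAR = 8_760
HOME_ANNUAL_KWH = 10_000   # assumed average household consumption per year

server_annual_kwh = SERVER_POWER_KW * HOURS_PER_YEAR   # 262,800 kWh/year
homes_equivalent = server_annual_kwh / HOME_ANNUAL_KWH

print(f"One server per year: {server_annual_kwh:,.0f} kWh")
print(f"Equivalent households: {homes_equivalent:.0f}")  # ≈ 26
```

Under these assumptions, one always-on server accounts for the annual consumption of roughly 26 homes – in the same ballpark as the figure cited above, and a vivid measure of why cooling and power provisioning dominate AI data center design.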
These challenges highlight the intricate relationship between AI advancements and the infrastructure required to support them.
China is investing heavily in both computational power and energy-efficient solutions to achieve global domination in AI. However, with US sanctions, funding gaps and energy concerns, the road ahead remains uncertain.

The Zhejiang Cloud Computing Data Center
(The author is founder of WisePromise, a boutique advisory agency specializing in the international expansion of Chinese tech companies in the advanced hardware and energy sectors. He also serves as a geo-economic expert for several think tanks in Beijing.)
