Societal resistance, grid congestion, and limited physical space are constraining data center expansion on Earth. In response, tech giants and startups alike are looking to a new domain: space. Could IT infrastructure, and even the rise of AI, eventually migrate beyond the atmosphere?
Amazon founder Jeff Bezos predicted in October that gigawatt-scale, solar-powered data centers would be a reality within 10 to 20 years. At the World Economic Forum in Davos this week, Elon Musk was even more ambitious: AI data centers in the “final frontier” could be viable within two to three years. Google also plans to deploy its TPU chips in orbit via Project Suncatcher. Meanwhile, startups like Starcloud and Planet are actively working on space-based computing power. Beyond the speculation of tech CEOs, engineering teams are already working to realize this innovation. Despite significant challenges that make Musk’s timeline appear less realistic than Bezos’, the rationale for this shift is strong.
Fertile ground
The conceptual leap required to put server racks in space is smaller than it appears. There is already the precedent of the International Space Station (ISS). Since 2021, the HPE Spaceborne Computer-2 (SBC-2) has operated on the station, processing data locally and sending results back to Earth. Much of the station's other control hardware still runs on legacy Intel Pentium and 386 systems, proof that chips can withstand decades of radiation exposure. Furthermore, the Starlink constellation demonstrates that high-speed internet connectivity works reliably from orbit. The step toward more advanced hardware is not insurmountable.
Two further advantages are economic. First is the cost of launch. SpaceX currently charges a few thousand dollars to transport one kilogram to Low Earth Orbit (LEO), the intended destination for this hardware. That is already a massive reduction compared to the late 20th century, when the Space Shuttle program cost tens of thousands of dollars for the same payload. Travis Beals, Senior Director of Paradigms of Intelligence at Google, expects the decline to continue: launching a kilogram into space could cost less than $200 sometime in the 2030s.
The second advantage is the orbit itself. In a sun-synchronous polar orbit, hundreds of kilometers above the surface, equipment can access near-continuous solar energy. This offers a dual efficiency gain: panels are illuminated almost around the clock, and the sunlight arrives undiminished by atmospheric absorption or weather. While these advantages are moot if launch costs remain astronomical, the gradual decline in pricing is bringing the break-even point closer.
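A back-of-the-envelope comparison illustrates the dual gain. The solar constant (about 1361 W/m² above the atmosphere) is a standard figure; the duty cycle and terrestrial capacity factor below are illustrative assumptions, not mission data.

```python
# Rough comparison of annual solar energy per square meter of panel:
# a near-continuously lit sun-synchronous orbit vs. a good ground site.
# The duty cycle and ground capacity factor are illustrative assumptions.

SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere (standard value)
ORBIT_DUTY_CYCLE = 0.97        # assumed: near-continuous sunlight in orbit
GROUND_PEAK = 1000             # W/m^2, clear-sky irradiance at sea level
GROUND_CAPACITY_FACTOR = 0.25  # assumed: night, weather, sun angle combined
HOURS_PER_YEAR = 8760

orbit_kwh = SOLAR_CONSTANT * ORBIT_DUTY_CYCLE * HOURS_PER_YEAR / 1000
ground_kwh = GROUND_PEAK * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"Orbit:  ~{orbit_kwh:,.0f} kWh per m^2 per year")
print(f"Ground: ~{ground_kwh:,.0f} kWh per m^2 per year")
print(f"Ratio:  ~{orbit_kwh / ground_kwh:.1f}x")
```

Under these assumptions, a panel in orbit yields roughly five times the annual energy of the same panel on the ground, which is the core of the economic case.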
Back to Earth
While the advantages are compelling, the potential disadvantages are stark. First, the physical equipment is beyond the reach of the technicians who keep a terrestrial data center running. Second, the risk of launch failure remains significant; a truck or container ship filled with AI chips does not carry the same risk of total loss as a rocket. While this risk is decreasing, the fact remains that the hardware must operate in a hostile environment. Constant radiation, extreme thermal cycling, and a vacuum that prevents convective heat dissipation are just a few of the engineering hurdles.
Another challenge lies in the fundamental design of space-grade IT hardware. This constraint invites creativity. Consider the satellite constellation envisioned by Travis Beals and his team at Google. In this model, nodes communicate via optical (laser) signals. Within the 100-200 meters separating the satellites, this light-speed communication is virtually instantaneous. Furthermore, as Starlink has proven, the latency of the connection to Earth can now be measured in mere tens of milliseconds.
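The latency claims above follow directly from the speed of light. A minimal sketch, using the article's 100-200 meter spacing and, for contrast, an assumed 550 km slant range for a ground link (actual link latency adds processing and routing overhead on top of propagation delay):

```python
# Propagation delay for the inter-satellite laser links described above.
# Distances between nodes come from the article; 550 km is an assumed
# LEO-to-ground slant range for comparison.

C = 299_792_458  # m/s, speed of light in vacuum

def one_way_delay_us(distance_m: float) -> float:
    """One-way propagation delay in microseconds."""
    return distance_m / C * 1e6

for d in (100, 200):
    print(f"{d} m between nodes: {one_way_delay_us(d):.2f} microseconds")

# A LEO ground link, by contrast, is measured in milliseconds:
print(f"550 km to ground: {one_way_delay_us(550_000) / 1000:.2f} ms one way")
```

At under a microsecond between nodes, the inter-satellite hop is indeed negligible next to the milliseconds of the ground link.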
It might be assumed that the cold of space simplifies cooling, but the opposite is true. Because a vacuum lacks air, heat cannot be removed via convection; it must be dissipated entirely through radiation. Today’s AI chips often require over a kilowatt of power. This energy converts to heat, which must be rejected to maintain semiconductor performance (typically keeping junctions well below 100 degrees Celsius). Radiative cooling is inefficient compared to terrestrial air or water cooling and requires large surface areas. While liquid cooling is an option, it introduces points of failure, requires maintenance that is impossible in orbit, and adds significant launch mass, weakening the economic proposition.
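The scale of the radiator problem can be sketched with the Stefan-Boltzmann law, which governs how much heat a surface can shed by radiation alone. The chip power matches the article; the radiator temperature, emissivity, and one-sided geometry are illustrative assumptions, and real designs must also account for heat absorbed from the sun and Earth.

```python
# Radiator sizing sketch via the Stefan-Boltzmann law:
# P = emissivity * sigma * A * (T_rad^4 - T_env^4).
# Radiator temperature and emissivity are illustrative assumptions;
# solar and Earth thermal loading are ignored for simplicity.

SIGMA = 5.670e-8    # W/(m^2 K^4), Stefan-Boltzmann constant
EMISSIVITY = 0.9    # assumed: typical treated radiator surface
T_RADIATOR = 353.0  # K (~80 C, keeping chip junctions below 100 C)
T_SPACE = 3.0       # K, deep-space background

def radiator_area_m2(heat_w: float) -> float:
    """One-sided radiating area needed to reject heat_w watts."""
    flux = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SPACE**4)  # W/m^2
    return heat_w / flux

print(f"One 1 kW chip:  {radiator_area_m2(1_000):.2f} m^2")
print(f"A 100 kW rack: {radiator_area_m2(100_000):.1f} m^2")
```

Even under these generous assumptions, a single kilowatt-class chip needs on the order of a square meter of radiator, and a full rack needs a surface comparable to a tennis court, which is why thermal design dominates these proposals.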
Conclusion: a promising future
The future of the data center is not exclusively terrestrial. This trajectory seems inevitable given that the leaders of SpaceX (Elon Musk) and Blue Origin (Jeff Bezos) run massive technology companies alongside their aerospace ventures. Like Google, they will focus on innovative solutions to the harsh realities of space.
Currently, building a large-scale AI computing cluster in orbit is too expensive to be a sensible investment. Additionally, the rapid obsolescence of hardware means advanced chips are superseded within two years. The ideal workload for space-based AI is also an open question: which specific configurations justify the cost of operation in orbit? Once these economic and logistical puzzles are solved, a gradual expansion of space-based computing is logical. It may not immediately resolve pressure on the energy grid or land use, but it offers a vital avenue for future growth.