Starcloud: Shaping the Future of Space-Based Data Centers

As demand for compute continues to soar, energy usage and infrastructure constraints are emerging as critical bottlenecks. Terrestrial data centers face mounting challenges around power availability, cooling efficiency, and permitting. These problems worsen as models grow larger and denser.

Space offers a radically different environment. With uninterrupted solar energy, passive radiative cooling to the cold of deep space, and no terrestrial permitting constraints, orbital infrastructure is emerging as a viable platform for next-generation computing. Rapidly falling launch costs are making it economically feasible.

Starcloud was founded in 2024 by Philip Johnston (CEO), Ezra Feilden (CTO), and Adi Oltean (Chief Engineer) to explore that frontier. The company is building GPU-powered data centers in space, starting with edge computing for satellite data and expanding toward a sovereign cloud platform designed to scale beyond Earth.

What sets Starcloud apart is its focus on solving the most challenging engineering bottlenecks of orbital computing, backed by a team with direct mission experience. Johnston is a second-time founder with a background in applied physics and national security. He holds an MBA from Wharton and an MPA from Harvard. Feilden previously worked at Airbus Defence and Space and Oxford Space Systems, where he contributed to multiple missions including NASA’s Lunar Pathfinder. Oltean spent 20 years as a Principal Software Engineer at Microsoft building large GPU clusters, and later joined SpaceX, where he led the Starlink tracking beam program.

In one of the largest-ever seed rounds after Y Combinator, Starcloud raised $21 million between December 2024 ($11 million) and February 2025 ($10 million). The round was backed by NFX, In-Q-Tel, the NVIDIA Inception Program, 468 Capital, Soma Capital, and scout funds from a16z and Sequoia, among others. 

Learn more about the future of space-based data centers from our interview with Starcloud co-founder and CEO, Philip Johnston:

What Inspired You to Start Starcloud?

I began my career in high-frequency trading after completing a master’s degree in applied mathematics and theoretical physics. Later, I earned an MBA from Wharton and an MPA from Harvard, then worked at McKinsey with the space agencies of the UAE and Saudi Arabia. After founding and running an e-commerce startup for three years, I moved on to what eventually became Starcloud.

The motivation came from two places. First, I have always had a deep personal interest in space. Second, during my time at McKinsey, I witnessed firsthand how rapidly launch costs were decreasing. That impression was reinforced when I visited Starbase in Texas in mid-2023, just before the first launch. The scale and ambition of what was being built there convinced me that this trend would continue, and that realization led me to think seriously about new applications.

Space-based solar was one obvious idea: massive solar panels in orbit, beaming power back to Earth. But instead of focusing on beaming power down to feed terrestrial data centers, we asked a different question: what if we moved the data center to space? We ran the launch-cost calculations and drafted a white paper, which became the genesis of Starcloud.

Editor’s note: Beaming power down refers to the concept of space-based solar power, where energy is collected by satellites equipped with solar panels and transmitted wirelessly (typically as microwaves or lasers) back to Earth, where it is converted into electricity.

How Is Starcloud Redefining the Future of Data Centers?

Starcloud is building data centers in space, made possible by the rapid decline in launch costs. Orbital data centers can tap into uninterrupted solar energy, use radiative cooling to scale to gigawatt levels, and thereby avoid many of the power, cooling, and permitting constraints that limit terrestrial infrastructure.

Editor’s note: Radiative cooling refers to the process of dissipating heat by emitting infrared radiation. In the vacuum of space, where there is no medium like air or liquid to transfer heat away, this becomes the only effective method of thermal regulation for data center hardware.

Our first step is to provide cloud computing services to other satellites; over time, we aim to compete directly with terrestrial data centers on energy costs. The long-term vision is that within about ten years, most new data centers will be built in orbit.

Our first commercial satellite, Starcloud-2, is scheduled to launch in 2026. It features a GPU cluster, persistent storage, continuous access, and proprietary thermal and power systems, all within a SmallSat form factor. Operating in a sun-synchronous orbit, it is designed to serve both in-space and terrestrial users.

Editor’s note: Persistent storage in this context refers to on-orbit data storage that remains available even when the satellite is not actively transmitting, similar to how data is stored and retrieved in terrestrial data centers.

Editor’s note: Continuous access means the ability for customers to connect to the satellite’s compute and storage resources at any time, made possible through high-bandwidth links to satellite constellations such as Starlink, rather than being limited to brief communication windows when the satellite is directly overhead.

Editor’s note: In this context, thermal and power systems are custom-designed technologies for managing heat dissipation and energy supply in space, enabling electronics to function reliably in orbit like terrestrial data center infrastructure.

For in-space customers, Starcloud-2 will enable real-time analysis of the terabytes of raw data generated daily by spacecraft and space stations. By processing this data in orbit, we eliminate downlink bottlenecks and deliver low-latency insights directly from Earth observation satellites.

Editor’s note: Downlink bottlenecks refer to limitations in sending data from satellites to Earth, often due to narrow bandwidth and limited transmission windows. These constraints can delay access to critical data or restrict the volume of information transmitted.

Editor’s note: Latency refers to the time delay between a request for data or a computation and the moment the result is received. In satellite systems, this often includes delays from transmitting data between space and Earth, processing it, and sending back the output.
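To make the downlink bottleneck concrete, here is a rough back-of-envelope sketch. The link rate, contact time per orbit, and payload sizes are illustrative assumptions, not Starcloud's specifications:

```python
# Why downlink bottlenecks matter: time to move data over a direct-to-ground
# link that is only usable during brief ground-station passes.
# All figures below are illustrative assumptions.
LINK_BPS = 1.2e9          # assumed X-band downlink rate, bits/s
CONTACT_S = 10 * 60       # assumed usable contact per 90-minute orbit, seconds
ORBIT_S = 90 * 60         # low-Earth-orbit period, seconds

def wall_clock_hours(payload_bytes):
    """Hours of wall-clock time to downlink a payload through pass windows."""
    tx_seconds = payload_bytes * 8 / LINK_BPS   # pure transmission time
    orbits = tx_seconds / CONTACT_S             # orbits needed to accumulate it
    return orbits * ORBIT_S / 3600

print(f"1 TB of raw imagery: {wall_clock_hours(1e12):5.1f} h")
print(f"100 MB of insights:  {wall_clock_hours(1e8):5.3f} h")
```

Under these assumptions, a terabyte of raw imagery ties up the link for the better part of a day, while a compact set of in-orbit insights moves in seconds — which is the trade Starcloud describes.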

For terrestrial customers, Starcloud-2 will eventually offer secure global data storage and sovereign cloud computing that operates fully independently of Earth-based infrastructure. This provides a highly redundant environment for critical data backup as well as premium high-performance computing.

Editor’s note: Redundancy refers to the duplication of critical systems or data so that if one component fails, another can take over without loss of service. 

Are There Technological Differences Between Terrestrial and Orbital Data Centers in Terms of Compute Architecture?

From the customer’s perspective, the experience will be largely the same. The types of chips used, how they are networked, and how data centers connect to the ground all resemble terrestrial cloud services. 

The differences lie in the supporting infrastructure. First, the hardware must survive launch, requiring the entire system to withstand significant vibration and mechanical stress. Second, in orbit, the chips must function in a high-radiation environment, which presents challenges for system reliability. Third, thermal management becomes more complex, as the absence of convection and conduction demands new approaches for dissipating heat into space.

Editor’s note: Convection is heat transfer through the movement of fluids such as air or liquid. On Earth, most cooling systems rely on convection, like airflow over server racks.

Editor’s note: Conduction is heat transfer through direct contact between materials, such as heat moving from a hot chip into a heat sink.

How Is Starcloud Addressing the Radiation and Thermal Challenges of Operating Orbital Data Centers?

With respect to radiation: outside Earth’s protective atmosphere, electronics are exposed to cosmic rays and charged particles that can flip bits, degrade performance, or even permanently damage hardware. Our approach to solving this combines shielding with software-level adaptations to build fault tolerance, enabling reliable operation in harsh environments.

Editor’s note: Flip bits refers to radiation-induced errors where a 0 in computer memory is accidentally changed to a 1, or vice versa. These single-event upsets can corrupt data or cause systems to malfunction if not corrected.
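One common software-level defense against such bit flips is triple modular redundancy (TMR): keep three copies of each value and take a bitwise majority vote on read, so any single flipped copy is outvoted. The sketch below is illustrative of the general technique; Starcloud has not disclosed its exact scheme:

```python
# Triple modular redundancy (TMR): a classic software guard against
# radiation-induced single-event upsets. Illustrative only.

def tmr_write(value: int):
    return [value, value, value]  # three independent copies

def tmr_read(copies):
    a, b, c = copies
    # Bitwise majority: a bit is set in the result iff it is set in >= 2 copies.
    return (a & b) | (a & c) | (b & c)

copies = tmr_write(0b1010)
copies[1] ^= 0b0100                # simulate a cosmic ray flipping one bit
assert tmr_read(copies) == 0b1010  # the corrupted copy is outvoted
```

Real systems layer this with ECC memory, checksums, and periodic scrubbing; TMR is simply the easiest of these patterns to show in a few lines.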

With respect to thermal management: high-performance compute generates significant heat, and in the vacuum of space, there is no natural way to dissipate it. Typically, you pump fluid directly across the surface of a chip, but this only cools the chip itself and misses most of the ancillary components on the motherboard and power systems. Tiny transistors, resistors, and other elements that stay cool enough on Earth can overheat in space. To solve this, we immerse the entire board in a thermally conductive liquid, such as oil or wax, so that every component is cooled evenly. We then radiate the excess energy away as infrared light using large, lightweight, deployable radiators designed for scale. Developing deployable radiators that are both large enough to handle these loads and light enough to launch cost-effectively is a complex engineering problem that we will solve.

Editor’s note: Ancillary components are supporting parts of a system, such as connectors, power regulators, or smaller chips, that are not the main processor but are still essential for reliable operation.

Editor’s note: Motherboard is the main circuit board that holds and connects all components of a computer system, including the processor, memory, and power circuits.

Editor’s note: Transistors are tiny switches that control the flow of electrical signals in a circuit, while resistors are components that limit or regulate the flow of current.

Editor’s note: Radiators in space data centers are deployable panels that emit excess heat as infrared radiation. They work together with liquid cooling to keep hardware at safe operating temperatures in the vacuum of space.
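The radiator-sizing problem can be made concrete with the Stefan-Boltzmann law, which governs how much heat a panel can emit as infrared. The numbers below are illustrative assumptions (a flat two-sided panel, emissivity 0.9, running at 300 K, ignoring absorbed sunlight and Earth's infrared loading), not Starcloud's figures:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# emitted power per unit area = emissivity * sigma * T^4 (per emitting side).
# Assumptions are illustrative, not Starcloud's actual design values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w, emissivity=0.9, temp_k=300.0, sides=2):
    """Panel area needed to radiate `heat_w` watts as infrared."""
    flux = sides * emissivity * SIGMA * temp_k ** 4  # W per m^2 of panel
    return heat_w / flux

print(f"{radiator_area_m2(100e3):,.0f} m^2 for a 100 kW cluster")
print(f"{radiator_area_m2(40e6):,.0f} m^2 for a 40 MW data center")
```

Under these assumptions a 40-megawatt data center needs tens of thousands of square meters of radiator, which is why large, lightweight, deployable panels are the crux of the engineering problem described above.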

How Does Your Technology Handle the Latency and Bandwidth Limitations Inherent in Orbital Computing?

We connect directly to the backbone of major satellite constellations, particularly Starlink. Each Starcloud satellite is equipped with a laser terminal that provides extremely high throughput, up to hundreds of gigabits per second, and latency as low as 50 milliseconds. This combination of high bandwidth and low latency enables seamless integration with existing cloud workflows.

Editor’s note: In this context, a laser terminal is an optical communication device on a satellite that uses laser beams to transmit data at very high speeds between satellites or directly to ground stations.

Editor’s note: Throughput refers to the actual rate of successful data transfer over a network, indicating the amount of information that moves in practice within a given time. Unlike bandwidth, it reflects real-world performance rather than maximum capacity.

Editor’s note: Bandwidth is the maximum capacity of a network connection, representing the upper limit of how much data can be transmitted per second. Unlike throughput, it does not account for losses or inefficiencies in the system.
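The bandwidth-versus-throughput distinction is easy to see numerically: a fraction of the link's rated capacity is lost to protocol framing, coding, and retransmissions. The 80% efficiency figure below is an illustrative assumption, not a measured Starcloud number:

```python
# Bandwidth is the link's rated capacity; throughput is what a transfer
# actually achieves after protocol overhead. Efficiency is an assumption.
def transfer_seconds(payload_bytes, bandwidth_bps, efficiency=0.8):
    throughput_bps = bandwidth_bps * efficiency  # effective goodput
    return payload_bytes * 8 / throughput_bps

# 1 TB over a 100 Gbps laser link at 80% efficiency:
print(f"{transfer_seconds(1e12, 100e9):.0f} s")  # -> 100 s
```

Even with that haircut, an optical inter-satellite link moves a terabyte in under two minutes, which is what makes the "seamless integration with existing cloud workflows" claim plausible.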

Which Types of Computing Workloads Are Best Suited for Your Infrastructure?

In the near term, our focus is on workloads from other satellites. These are typically inference tasks on imagery and other sensor data, such as hyperspectral or radar streams. Instead of downlinking terabytes of raw data, we perform the analysis in orbit and transmit only the insights. This is especially valuable for defense and government customers who require rapid, low-latency access to actionable information.

In the long term, our platform should support the same workloads as terrestrial data centers. With high bandwidth and low latency, we expect to handle real-time applications, large-scale inference, and training on space-based GPU clusters just as on Earth.

Editor’s note: Inference is the process of running a trained AI model to generate outputs or predictions from new data. It represents the application phase that follows training.

How Modular Is Your Orbital Compute Platform?

Right now it is not modular, but in the long term it will be. For data centers up to approximately 40 megawatts, there is little benefit to modularity, as we can simply scale by increasing the size of a single satellite.

Modularity becomes crucial once we reach the full payload capacity of a Starship, which is around 100 tons. At that point, we expect to be able to deploy a 40-megawatt data center in orbit; beyond that scale, the platform will need to evolve into a modular architecture.

Editor’s note: Modularity in a data center refers to building it from separate units, such as compute, storage, or cooling, which can be added or replaced independently. This makes scaling and adapting easier than with a single monolithic system.

Do You Foresee Specialized GPU or ASIC Hardware Emerging Specifically for Orbital Computing?

Eventually, yes, but not in the near term. For the next decade, we plan to rely on standard terrestrial chips. Our entire architecture is designed to make off-the-shelf hardware viable in orbit.

If space-based compute were to grow into a significant share of the global market, for example 25% of total demand, then companies like NVIDIA might have the incentive to design chips optimized explicitly for the orbital environment. Until then, our focus is on adapting today’s terrestrial GPUs to work reliably in space.

What Is the Step-Function Milestone That Would Validate Your Model at Scale?

Our first launch is only a few months away. That mission is designed to prove that our thermal management and radiation shielding techniques allow terrestrial data center–grade GPUs to operate in orbit. We will be the first to fly an NVIDIA H100 to space and plan to demonstrate several capabilities: training a model in orbit, performing fine-tuning, and running high-powered inference directly in space. Those demonstrations will mark a critical technical milestone.

The second satellite, scheduled for launch in October next year, represents our first commercial offering. It will serve edge workloads for other satellites and is designed to generate more revenue than it costs to build and launch. Achieving that would be clear proof of commercial viability.

Editor’s note: H100 is a high-performance GPU from NVIDIA’s Hopper architecture, widely used in data centers for training and running large AI models. It is considered the leading chip for state-of-the-art AI workloads today.

Editor’s note: Edge workloads are computing tasks processed near where the data is generated, such as on satellites or sensors, rather than in a distant central data center. This reduces latency and makes systems more responsive in real time.

What Advice Would You Give to Fellow Deep Tech Founders?

The rarest and most valuable resource is not ideas but exceptional technical talent. In the early days, that should be your North Star. If you can attract the very best people in the world to work on your vision, good things will happen. If you cannot, progress will be extremely difficult. More than anything else, assembling that caliber of talent is what determines success.