Lyceum: Shaping the Future of Sovereign Compute Infrastructure
As AI models grow more powerful, so does the demand for compute infrastructure that is fast, sovereign, and deeply integrated.
From training AI models to running complex simulations or large-scale data pipelines, modern compute workloads demand not only cutting-edge GPUs but also infrastructure that can manage scheduling, orchestration, and hardware optimization at scale. Today’s cloud offerings often fall short when it comes to flexibility, transparency, and data sovereignty.
Lyceum, founded earlier this year by Magnus Grünewald (CEO) and Maximilian Niroomand (CTO), is building a sovereign GPU cloud platform from the ground up. Based in Berlin and Zurich, the company combines its own hardware with a user-friendly software layer that enables one-click deployment, automated hardware selection, and upfront pricing.
What’s especially impressive: Lyceum already has EU-sovereign NVIDIA Blackwell B200 GPUs deployed today, with B300s coming shortly, making it one of the first providers in Europe to offer this level of compute performance.
Just six months after founding, Lyceum raised a €10.3M pre-seed round led by redalpine, with participation from 10x Founders and others. The team is now preparing to launch its first liquid-cooled data center and expand its engineering and commercial operations.
Learn more about the future of GPU infrastructure in Europe from our interview with Lyceum co-founder and CEO, Magnus Grünewald:
Why Did You Start Lyceum?
Before founding Lyceum, I was at Enpal, where I co-founded the installation and fulfillment arm for heat pumps. That experience sparked my interest in energy and construction. With my background in computer science, I naturally started thinking about data centers. Around that time, about 1.5 to 2 years ago, I saw an interview with Mark Zuckerberg mentioning Meta’s need for a one gigawatt training node. Everyone was talking about the shortage of data center capacity, and I thought that with my energy and construction experience, I could actually build these. That’s why I began thinking about building next-generation data centers designed for the AI era, with high power density and liquid cooling.
Then I met my co-founder Max, a highly technical person and long-time power user of high-performance compute. He approached the space from the user perspective, while I came from the infrastructure side. We started thinking about how to combine both views and realized there was a lot of value in a vertically integrated approach. We spoke to many customers, deepened our understanding of the market, and ultimately set out to make compute access as simple and seamless as possible.
How is Lyceum Rethinking Access to GPU Infrastructure?
Our core vision is to become the most seamless compute provider in the market. We are building software on top of our own data centers, offering a fully integrated platform that makes high-performance compute easily accessible.
Today, using GPUs at scale is still extremely complex. From managing Kubernetes clusters to selecting the right hardware and estimating costs, there are countless steps users need to navigate. We aim to remove that friction and automate the entire pipeline, so customers can focus on building without worrying about the infrastructure behind it.
In terms of product, our goal is to let users run compute directly from their IDE, making it feel just like running code locally. With the Lyceum extension, they simply click, specify when they need the results, and we take care of the rest: choosing the right hardware, orchestrating the workload on our stack, and delivering the output. It’s designed to be intuitive, so access to compute feels immediate, not technical.
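For illustration, here is a minimal sketch of what that "submit from your IDE and forget about the infrastructure" flow could look like. Lyceum has not published its SDK, so every name below (the lyceum package, submit(), the deadline argument) is hypothetical and only meant to convey the shape of the workflow described above.

```python
import lyceum  # hypothetical client library, not a published package

# Submit the same script you would otherwise run locally, plus a deadline
# for when the results are needed.
job = lyceum.submit(
    entrypoint="train.py",
    deadline="2025-11-30T18:00:00Z",
)

# Hardware selection, packaging, and orchestration happen on the provider's
# side; the client simply waits for the finished artifacts.
result = job.wait()
print(result.output_uri)
```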
How Does Lyceum’s System Orchestrate and Optimize GPU Workloads End-To-End?
There are two main components: the software infrastructure and the hardware infrastructure. Both are essential to deliver the experience we promise.
On the orchestration side (software), we receive workloads via simple APIs, integrated into IDE extensions and SDKs, and automatically analyze them. Based on that analysis, we select the most efficient hardware, recognizing, for example, that a specific job will run best on a B200 while others can be served with far less powerful machines. This optimizes the cost efficiency of our customers’ AI infrastructure by preventing failed runs and overprovisioning. We then handle all orchestration: packaging the workload, deploying it on our infrastructure, running it, and sending back the results.
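To make the hardware-selection step concrete, here is a minimal sketch of the idea: match each job's estimated resource footprint to the cheapest GPU that can serve it, so small jobs are not overprovisioned onto flagship hardware. The GPU catalogue, prices, and selection rule below are illustrative assumptions, not Lyceum's actual scheduler logic.

```python
from dataclasses import dataclass

@dataclass
class GpuType:
    name: str
    memory_gb: int
    hourly_cost: float  # made-up prices, for illustration only

CATALOGUE = [
    GpuType("L40S", 48, 1.1),
    GpuType("H100", 80, 2.5),
    GpuType("B200", 192, 4.8),
]

def pick_gpu(estimated_memory_gb: float) -> GpuType:
    """Choose the cheapest GPU whose memory fits the estimated footprint."""
    candidates = [g for g in CATALOGUE if g.memory_gb >= estimated_memory_gb]
    if not candidates:
        raise ValueError("Workload exceeds single-GPU memory; needs multi-GPU placement")
    return min(candidates, key=lambda g: g.hourly_cost)

print(pick_gpu(30).name)   # a small fine-tune does not need a B200
print(pick_gpu(150).name)  # a large training run does
```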
On the hardware side, we’re currently running on co-located prototypes while building out our own infrastructure. We take care of everything from server management and cooling to stable connectivity, ensuring the physical environment is optimized for high-performance computing.
Combining those two, we are able to offer a seamless front-end experience for our users, with minimal setup and management, while still guaranteeing the necessary performance.
Do You Operate Your Own Data Centers or Lease GPU Capacity From Partners?
There’s a short-term and a long-term perspective. We’re a very young company, so building our own data center isn’t feasible yet, as much as we’d love that to be the case.
In the short term, we’re renting space in a liquid-cooled data center in Paris and deploying our own GPUs there. The setup is fully EU sovereign, and we plan to go live with our first B200 GPUs in about 1.5 months.
Long term, we’re working on larger deals to build our own infrastructure. That will give us full control over the environment, including cooling and power, and allow us to future-proof our operations.
How Do You Balance Infrastructure Sovereignty With the Need for Scalability and Operational Flexibility?
It’s a super important question. We’re strong believers in being vertically integrated because it allows us to deliver the most seamless product to users and avoid paying margins to other players. So in theory, being vertically integrated and doing everything ourselves would be ideal. But as we’re just starting out, we really need to focus on specific aspects.
Building out our own capacity while letting other players handle the operational side, because they know what they’re doing, allows us to focus on what matters most to us: the software side of things.
What GPU Architectures Does Lyceum Support and Which Workloads Is It Optimized For?
We currently offer B200 GPUs hosted in our partner’s data center, with B300s coming online soon. To support a broader range of workloads, we maintain strong relationships with other providers to offer access to H200, H100, L40S, and select AMD models.
Our software stack is designed to work seamlessly across most architectures in our resource pool. This flexibility allows our orchestration platform to dynamically provision the most suitable chips for each workload, whether for model training, fine-tuning, or inference. As we expand our own data center infrastructure, our platform will support runs across several hundred GPU nodes.
What’s the Biggest Technical Constraint Right Now?
At the core, the biggest technical challenge we’re facing right now is predicting the runtime of a job accurately. We’re working hard on this because it unlocks major advantages in scheduling and in selecting the optimal hardware for each workload.
If we can rapidly analyze and deeply understand the job a customer submits, we can significantly improve the overall performance and experience of the platform. That’s really where the core challenge lies. GPU availability, routing, and things like that are no longer bottlenecks.
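One way to frame runtime prediction is as a regression problem over job features. The sketch below is purely illustrative: the features (parameter count, token count, GPU count), the toy data, and the linear least-squares model are assumptions, since the interview does not describe how Lyceum actually predicts runtimes.

```python
import numpy as np

# Hypothetical historical jobs: [parameter count (B), training tokens (B), GPU count]
X = np.array([
    [0.1,  1.0,  1],
    [1.0,  5.0,  4],
    [7.0, 50.0,  8],
    [7.0, 50.0, 16],
], dtype=float)
runtime_hours = np.array([0.5, 3.0, 40.0, 22.0])

# Fit a simple linear model runtime ~ X @ w + b via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, runtime_hours, rcond=None)

def predict_runtime(params_b: float, tokens_b: float, gpus: int) -> float:
    """Estimated runtime in hours; the estimate feeds scheduling and
    hardware selection (does the job fit before the user's deadline?)."""
    return float(np.array([params_b, tokens_b, gpus, 1.0]) @ w)

print(round(predict_runtime(1.0, 10.0, 8), 1))
```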
What’s the Milestone That Would Validate Your Model at Scale?
The biggest milestone we’re currently working toward is getting the data center we’re co-developing fully up and running and staffed. It will be a relatively small deployment, around two megawatts of power, but a major validation step for us. We’re aiming to have it live with our own GPUs by mid-2026, so in about 11 months.
Once that’s operational, we’ll have proven that we can deploy, manage, and orchestrate large-scale GPU infrastructure at production level. It’s the key step that validates our model at scale.
What Advice Would You Give to Fellow Deep Tech Founders?
Being aggressive in fundraising and ambitious in your vision is incredibly important. Especially in Europe, there’s often a tendency to over-test, to stay cautious, and to get lost in small iterations. People can become afraid of scale.
The biggest advice I’d give is: be bold. Take risks, take action, and really commit to what your idea could become. Then do your best to make it happen. There are so many great people in the ecosystem, like our investors, who are willing to back strong visions. I believe every founder can find the right counterpart if they lead with ambition and clarity.
