ZeroPoint Technologies: Shaping the Future of Ultra-Fast, Hardware-Based Data Compression
Data centers are the backbone of our digital world. As the demand for compute skyrockets, so does the need to store and retrieve data efficiently, not only for streaming high-definition videos but also for training the latest AI models.
Currently, data centers consume about 2% of global electricity, with projections suggesting this could grow fivefold by 2030. Despite the push for renewables, managing the rising energy costs remains a challenge. But what if you could remove up to 70% of unnecessary data and thereby boost performance per watt by up to 50%?
ZeroPoint Technologies pioneers hardware-based data compression that can increase memory capacity by 2-4x and reduce the total cost of ownership for servers by up to 25%. Founded in 2016 by Per Stenström and Angelos Arelakis, it recently closed a €5.0M Series A, led by Matterwave Ventures, and joined by Industrifonden, Climentum Capital, and Chalmers Ventures. They also went through the Intel Ignite startup program.
Learn more about the future of ultra-fast, hardware-based data compression from our interview with the CEO, Klas Moreau:
Why Did You Join ZeroPoint Technologies?
I have always been interested in the semiconductor industry, starting out as an engineer in the 1990s working on chips for Ericsson. After that, I worked for a couple of other companies as a CEO or board member until the opportunity opened up to join ZeroPoint Technologies in 2019 and return to the semiconductor industry. Scientists had founded the company, but now they were looking for someone to take it to the next level.
I was very impressed not only by their real-time data compression technology but also by their world-class team. It seemed like an opportunity to make a significant impact in the semiconductor industry, address a hair-on-fire problem, and unlock huge performance gains, so I took the leap and joined ZeroPoint Technologies.
How Does Your Compression Technology Work?
Every computer has memory, but no one ever said they have too much of it. There is always too little. Research has shown that up to 70% of the information stored in memory is redundant and thus unnecessary. We have developed a fast compression technology that stores only the necessary information, enabling much greater memory capacity as well as higher speed, bandwidth, and energy efficiency.
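The arithmetic behind these capacity claims is straightforward: if up to 70% of what sits in memory is redundant, storing only the remaining 30% lets the same physical memory hold several times more logical data, which is where the 2-4x figure comes from. A minimal sketch (illustrative numbers only, not ZeroPoint's measurements):

```python
def effective_capacity(physical_gb: float, redundant_fraction: float) -> float:
    """If a fraction of stored data is redundant and can be compressed
    away, the same physical memory holds more logical data."""
    return physical_gb / (1.0 - redundant_fraction)

# 64 GB of physical memory at 70% redundancy behaves like ~213 GB:
print(round(effective_capacity(64, 0.70)))  # -> 213
# At 50% redundancy, capacity doubles:
print(effective_capacity(64, 0.50))  # -> 128.0
```

In practice the achievable gain depends on the workload's actual redundancy, which is why the article quotes a 2-4x range rather than a single number.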
We develop hardware IP that blends into the compute architectures of our customers’ chip designs. It’s delivered as source code (soft IP), but once customers synthesize it into the silicon for their data center servers, it becomes hardware. These hardware blocks act as a memory controller and can work with many different kinds of memory, such as local SRAM caches, increasing their capacity and enabling faster, higher-bandwidth access to memory.
First, we developed IP to compress level 2 and level 3 SRAM cache, doubling its effective capacity. We then showed with SuperRAM that a hardware accelerator can also efficiently compress data in a computer’s main memory. Finally, we use Compute Express Link (CXL), a new standard designed to work on top of the existing PCIe infrastructure, to connect memory modules directly to the CPU or to more specialized processors, such as GPUs or FPGAs, allowing for high-speed, high-bandwidth, low-latency data transfers.
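As a toy illustration of why cache lines compress so well (a hypothetical frequent-pattern scheme, not ZeroPoint's actual algorithm): real cache lines are full of zero words and repeated values, and encoding those with a short tag instead of a full word can cut a line's footprint by half or more.

```python
def compress_line(words):
    """Encode a list of 64-bit words. Zeros and immediate repeats cost
    1 tag byte instead of 8 data bytes (byte costs are illustrative)."""
    size = 0
    prev = None
    for w in words:
        if w == 0 or w == prev:
            size += 1          # tag-only encoding for zero/repeated word
        else:
            size += 1 + 8      # tag byte plus the full 8-byte word
        prev = w
    return size

# A cache line where most words are zero or repeated:
line = [0, 0, 42, 42, 42, 0, 7, 0]
print(8 * len(line))           # raw size: 64 bytes
print(compress_line(line))     # compressed: 24 bytes
```

Production hardware does this transparently at line granularity and at memory-controller speeds, which is what makes doubling effective cache capacity feasible.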
What Trends Do You See in Computer Design Today?
Instead of developing large, monolithic systems-on-chip, engineers increasingly develop chiplets and combine them into packages, which lets them scale performance more modularly and mix different components more easily.
Systems architects are increasingly using hardware accelerators to solve specific computing problems rather than relying on general-purpose processors. They have long used GPUs, but also more specialized processors for digital signal processing and data input/output, because these handle their specific tasks significantly better than CPUs do. With the recent AI boom, GPUs have seen wide adoption, which is part of an overall trend in the semiconductor industry toward heterogeneous computing.
Also, the memory hierarchy pyramid is gaining more layers, which allows data to be fetched and stored more efficiently. Currently, memory, not processors, is often the bottleneck; faster memory thus also unlocks better processor utilization. Using compression, we can leverage compute resources much more efficiently.
With the advent of AI, the demand for high-quality data has become insatiable, and we’re seeing a lot of demand to compress this data and store and access it more efficiently. If data is stored more cleverly in the memory hierarchy, fetching it also becomes more efficient, improving AI performance across the system. This is a tailwind for our hardware-accelerated compression technology.
How Did You Evaluate Your Startup Idea?
We have verified our technology on a TSMC 5 nm node and demonstrated that it works today. Right now, we’re working with customers on different levels of the memory hierarchy to map out the value we can provide them. Doubling the cache or making on-chip memory much faster is a huge deal.
We don’t compete with memory manufacturers, as we build IP that connects to the memory controller rather than being a memory itself. Instead, we help them improve their memory and make it more energy-efficient. And who wouldn’t want more performant and secure memory?
What Advice Would You Give Fellow Deep Tech Founders?
Things always take longer than you think. You hear this often, but it is so true. Still, it’s good to be optimistic because otherwise, you wouldn’t go through all the steps needed to create customer value with your product. You need to be patient, spend a lot of time learning about your customers, and find a good way to package your offering, especially if you’re selling something very technical and complicated.
In Europe, founders rarely start semiconductor companies, and one reason is that they’re concerned about timelines. But there are ways to mitigate this, e.g., by going fabless or by providing soft IP. Today, there is a huge opportunity in semiconductor hardware—software alone won’t be able to make a 10x difference, but combining hardware and software can unlock a step change in performance. That’s why you shouldn’t be afraid to work on semiconductor hardware.