8 Trends Shaping the Future of Computing in 2023

The future of computing is here, and I’ve been talking to the people making it happen. 

Since the spring of 2020, I have interviewed more than 60 startup founders about their journey and technology, and I have had the privilege of learning firsthand from these trailblazers.

Here are eight major trends shaping the future of computing today.

1. The Age of AI Has Begun

Before ChatGPT, you could get fired for implementing machine learning in your company; now, you will get fired for not implementing machine learning. 

Since its launch in late 2022, ChatGPT has demonstrated the impressive abilities of machine learning, including how it will lead to a second wave of automation: automating huge parts of the intellectual work currently done by humans, from writing poems and marketing ads to writing computer programs.

It changes our approach to problem-solving, as Andrej Karpathy has pointed out in his article on the new Software 2.0 paradigm. Instead of writing code that solves a problem directly, the work becomes collecting data and training a machine learning model on it, thereby teaching the computer to solve the problem itself, e.g., to distinguish cats from dogs.
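
To make the contrast concrete, here is a minimal sketch in Python using scikit-learn. The feature vectors and labels are random placeholders standing in for real, labeled image data; the point is that no explicit cat-versus-dog rules are written anywhere, only examples are collected and a model is fit to them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: in practice, X would hold features extracted from labeled
# images and y the collected labels (0 = cat, 1 = dog), not random numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))      # 1,000 examples, 64 features each
y = rng.integers(0, 2, size=1000)    # labels gathered by humans

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# "Software 2.0": instead of hand-coding rules, fit a model to the examples.
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```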

This opens up entirely new markets, both for the development of custom silicon chips (see “The Unbundling of Semiconductors” below) and for an entire ecosystem of machine learning tools and platforms: vector databases suited to machine learning workloads, tools for managing the quality of training data, platforms for collaboratively developing and sourcing models, and services for compressing models, monitoring their behavior after deployment, and retraining them when necessary.

Despite the recent hype around AutoGPTs, we’re still pretty far from genuine artificial general intelligence. However, models don’t have to be sentient to be useful. While large language models have become more broadly capable as they have grown, fine-tuning makes it possible to build smaller models that solve very specific tasks even better.
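
As a rough illustration of that last point, here is a minimal PyTorch sketch of one lightweight form of fine-tuning: a (stand-in) pretrained backbone is frozen and only a small task-specific head is trained on new labels. The architecture, data, and hyperparameters are all placeholders, not a recipe for any particular model.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model; in practice, this would be loaded from a
# checkpoint (e.g., a small language or vision model).
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False          # freeze the pretrained weights

head = nn.Linear(256, 2)             # small task-specific classification head

# Placeholder task data: 512 examples with 128 features and binary labels.
X = torch.randn(512, 128)
y = torch.randint(0, 2, (512,))

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    logits = head(backbone(X))       # only the head receives gradient updates
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```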

2. From Real-World to Synthetic Data

Machine learning models require vast amounts of data for training. 

One way to get data is to scrape large amounts of text and images from the internet. For some use cases, this may simply be too costly to be worth it. For others, such as recognizing products on supermarket shelves, it is highly impractical to collect and label enough real-world data in the first place.

A solution to this problem is synthetic data: artificial data created by computers for the sole purpose of training machine learning models. The challenge, however, is generating synthetic data that is realistic and representative of the real world while requiring as little human input as possible.

There are different ways to generate synthetic data. One common approach is using game engines—computer programs intended for creating video games—to generate realistic 3D scenes that can be used to train machine learning models.
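
A real game engine is beyond the scope of a short example, but the underlying recipe, randomize a scene, render it, and keep the ground-truth labels for free, can be sketched with a toy renderer. Everything below (image size, shapes, label format) is an illustrative assumption, not a description of any particular product.

```python
import random
from PIL import Image, ImageDraw

def synth_example(size=128):
    """Render one synthetic image together with its ground-truth label."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)

    # Randomize the "scene": shape type, position, extent, and color.
    shape = random.choice(["circle", "box"])
    x0, y0 = random.randint(0, size // 2), random.randint(0, size // 2)
    x1, y1 = x0 + random.randint(10, size // 2), y0 + random.randint(10, size // 2)
    color = tuple(random.randint(0, 255) for _ in range(3))

    if shape == "circle":
        draw.ellipse([x0, y0, x1, y1], fill=color)
    else:
        draw.rectangle([x0, y0, x1, y1], fill=color)

    # The label comes for free: we know exactly what we rendered and where.
    return img, {"class": shape, "bbox": [x0, y0, x1, y1]}

dataset = [synth_example() for _ in range(1000)]
```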

Another approach is using generative machine learning models that create new data instances themselves. Currently, generative models are good enough to impress humans by writing funny poems or generating images of fantasy worlds. They are not yet good enough to train other models, e.g., for computer vision tasks, but they will eventually get there.

As synthetic data becomes more realistic, reliable, and affordable, it will enable a wide range of use cases, including autonomous driving, medical diagnosis, and fraud detection.

3. Bits Modeling Atoms

Machine learning doesn’t stop with teaching computers to produce natural language and recognize objects. It also makes scientific simulations faster, more accurate, and more accessible, impacting how we develop real-world products.

Complex scientific simulations, like density functional theory (DFT) simulations to design materials or computational fluid dynamics (CFD) simulations to design turbines, can easily take days, weeks, or even months to complete on current supercomputers. However, replacing parts of the underlying equations with the predictions of a machine learning model can drastically speed this up, cutting run times from days to milliseconds.

One approach is to use machine learning models end-to-end, i.e., replace the entire simulation with a machine learning model trained on existing simulation results. Another way is to approximate certain parts of a simulation by machine learning, e.g., the functional in a DFT simulation or the gradients in a CFD simulation. This can still significantly speed up the simulation without sacrificing accuracy.
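
As a sketch of the first approach in Python: a cheap analytic function stands in below for the expensive DFT or CFD solver, and the model choice and sample sizes are placeholders. The surrogate is trained once on existing simulation results and then answers new queries almost instantly.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Stand-in for an expensive solver mapping design parameters to a scalar
# quantity of interest; in reality, this would be a full DFT or CFD run.
def expensive_simulation(params):
    x, y, z = params
    return np.sin(x) * np.exp(-y**2) + 0.5 * z

# Step 1: run the costly simulation on a modest set of sampled designs.
rng = np.random.default_rng(42)
designs = rng.uniform(-2, 2, size=(500, 3))
results = np.array([expensive_simulation(p) for p in designs])

X_train, X_test, y_train, y_test = train_test_split(designs, results, test_size=0.2)

# Step 2: fit a surrogate that predicts the outcome in microseconds.
surrogate = GradientBoostingRegressor()
surrogate.fit(X_train, y_train)

print("surrogate R^2 on held-out designs:", surrogate.score(X_test, y_test))
```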

Machine learning can also create better user interfaces that empower users to run simulations without expertise in the underlying mathematics and help scientists and engineers quickly test hypotheses.

Better simulations will allow us to upgrade our physical world, e.g., develop drugs with better specificity, batteries with higher energy density, or more efficient turbines. While moving simulations to the cloud improved scalability, a genuine 10x leap will come from machine learning—and possibly quantum computing, once it’s developed further.

4. Quantum Hardware Remains Hard

Speaking of quantum computing: building quantum computers that outperform classical supercomputers has proven notoriously difficult over the past decades.

Quantum computing has raised significant interest thanks to its potential to leverage quantum effects to perform certain calculations much faster than classical supercomputers. However, despite the hype and billions in investment, no quantum computer has yet demonstrated an advantage over classical supercomputers on practical problems.

Part of the challenge is that current quantum computers are too “noisy”: they produce too many errors, which cannot yet be corrected, preventing them from sustaining the quantum effects needed for an advantage.

The other part of the challenge is that, without such a quantum advantage, it is tough to compete with classical supercomputers, whose computing power keeps increasing thanks to parallelization and to miniaturization, which doubles the number of transistors on a dense integrated circuit about every two years, as described by Moore’s law.

Every year, a handful of new quantum hardware startups explore different technology platforms, from superconducting qubits, ion traps, photonics, and neutral atoms to silicon spin qubits, to build a universal, fault-tolerant quantum computer. Others focus on near-term quantum business advantage, building a quantum processing unit (QPU) that, much like a GPU, outperforms CPUs on certain tasks.

5. Smart Algorithms Not Just for Quantum Computing

After three decades of research, only three quantum algorithms have been mathematically proven to run exponentially faster on a fault-tolerant quantum computer than their classical counterparts. And developing new quantum algorithms with such a built-in quantum advantage is incredibly hard.

Most quantum algorithms developed and deployed today, such as variational quantum algorithms or quantum neural networks, are hybrid quantum-classical algorithms: one part runs on a classical supercomputer, while the other part, ideally the classically hard part, runs on a quantum computer. As quantum computers mature, i.e., as error rates decrease, more of the algorithm can be moved onto the quantum hardware.
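
A heavily simplified sketch of that division of labor: in the code below, `quantum_expectation` is a purely classical stand-in for the parameterized circuit that would actually be executed on a QPU, while the gradient estimation (here via the parameter-shift rule) and the parameter updates form the classical outer loop.

```python
import numpy as np

# Classical stand-in for the quantum step: in a real variational algorithm,
# this would execute a parameterized circuit on a QPU and return a measured
# expectation value (e.g., an energy).
def quantum_expectation(params):
    return np.cos(params[0]) * np.sin(params[1]) + 0.5 * np.cos(params[1])

def parameter_shift_gradient(params, shift=np.pi / 2):
    """Estimate gradients by re-evaluating the circuit at shifted parameters."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = 0.5 * (quantum_expectation(plus) - quantum_expectation(minus))
    return grad

# Classical outer loop: gradient descent on the value reported by the "QPU".
params = np.array([0.1, 0.1])
for step in range(100):
    params -= 0.1 * parameter_shift_gradient(params)

print("optimized parameters:", params, "expectation:", quantum_expectation(params))
```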

It’s a bit like neural networks in the early days: there was no mathematical proof that they worked, but in some cases they turned out to work amazingly well. In the same vein, there is no mathematical proof that hybrid quantum-classical algorithms can consistently outperform their classical counterparts. One could argue that it’s mostly smart people doing smart things: given enough brain power, you can optimize almost any algorithm.

Quantum algorithms will most likely impact quantum chemistry and material development, helping to simulate other quantum systems such as atoms, molecules, or crystals. And similar to machine learning, a nascent ecosystem of tools will emerge, e.g., for quantum algorithm compression, quantum error correction, and platforms to exchange quantum algorithms or deliver them through a “quantum cloud.”

6. The Future of Optical Computing Is Bright

While quantum computing is still getting most of the hype, a new generation of optical computing startups has set out to enter data centers. 

The first wave focused on high-value use cases, e.g., building optical accelerators to perform matrix-vector multiplications quickly and efficiently for machine learning. However, a lot of the speed and efficiency gains were eaten up by the losses from integrating with existing electronic systems and the conversion between electronic and photonic signals. Most of the computing for machine learning remained with electronic chips.
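
The appeal of accelerating matrix-vector multiplication is that it dominates neural-network inference: a dense layer is essentially one such product, as the small NumPy illustration below shows (the layer size is arbitrary).

```python
import numpy as np

# One dense layer of a neural network is essentially a matrix-vector product,
# which is the operation optical accelerators aim to perform optically.
rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 4096))   # layer weights
x = rng.normal(size=4096)           # input activations

y = np.maximum(W @ x, 0.0)          # matrix-vector multiply followed by ReLU

# Each such layer costs 4096 * 4096 ≈ 16.8 million multiply-accumulates, and a
# forward pass chains many of them; this is where the time and energy go.
print(y.shape)
```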

The second wave of optical computing startups is just in the making, taking on the entire data center. From building new infrastructure, such as multi-color lasers, to moving every single component, including digital signal processors (DSPs), into the optical domain, their goal is to avoid the limitations of electronics altogether.

The ultimate challenge will be to build optical logic and memory chips, which would have the potential to restart Moore’s law for optical transistors. This would fundamentally change the way data is processed and stored and could unlock the terahertz age of computing: 1000x faster optical chips that would use multiple laser colors to compute in parallel while using just a fraction of the energy of an equivalent electronic chip. 

In the meantime, more energy-efficient transceivers, e.g., based on graphene, make converting between electronic and photonic signals less costly, so optical data communication becomes economical over ever shorter distances. While fiber has long been the standard for transmitting data over hundreds of kilometers between data centers, highly efficient transceivers and optical interposers will also enable optical data transmission within data centers and between chiplets.

7. The Unbundling of Semiconductors 

For decades, the chip industry has focused on developing general-purpose chips, CPUs, and on increasing their computing power by placing more transistors on a single chip, as described by Moore’s law. However, it is getting increasingly difficult to miniaturize transistors and improve a chip’s performance as chip design pushes physical limits at the nanoscale.

In the mid-2000s, computers started improving their computing power mainly through parallelization, adding more cores to a chip and more chips to a processor. In addition, graphics processing units (GPUs) started to take the lead not only for graphics but also for machine learning and scientific computing applications that benefit greatly from parallelization.

Machine learning has become extremely important, unlocking enormous amounts of value through the automation of cognitive work in various industries. Yet, machine learning doesn’t require general-purpose chips but chips that can perform very specific computational operations quickly and efficiently.  

As a result, we’re seeing the unbundling of semiconductors. More and more chips will be optimized for specific applications, mostly around machine learning, rather than being general-purpose. This happens on different levels of abstraction. 

First and foremost is the design of application-specific integrated circuits (ASICs) specifically for machine learning. Their design is even more customized, and thus more efficient, than that of GPUs, yet they still juggle zeros and ones to perform machine learning computations. Several startups have set out to build AI chips, but blue-chip giants have also built their own ASICs in-house, e.g., Google’s Tensor Processing Unit (TPU) or Apple’s Neural Engine.

On another abstraction level, startups build actual neuromorphic chips designed to mimic the way the human brain works. Using in-memory computing to avoid the von Neumann bottleneck and encoding neural networks directly in the hardware design could bring even more significant improvements than ASICs for training and inference. (While ASICs haven’t dethroned GPUs for machine learning so far, maybe neuromorphic chips eventually will.)

Finally, startups also explore fundamentally new physics principles for computing. Optical computing is one example, but other examples include biological processing units, using actual neurons to build a brain in a Petri dish, or chips for spiking neural networks that even more closely resemble how the human brain works. 

The unbundling of semiconductors will lead to new generations of faster, more efficient, and more specialized chips. And it’s not only chips that are becoming more customized but also electronic boards, as machine learning helps with component selection and with finding better, more tailored designs.

8. Clouds Move Faster

As more and more applications run in the cloud, the cloud itself is becoming faster and more optimized. This is driven not only by custom chips entering data centers but also by smart computer scientists who keep developing smart algorithms, e.g., to optimize the performance of databases. Faster databases can access and process data more quickly, enabling real-time applications such as multiplayer games, fraud detection, and financial trading.

Another area of optimization is data transfer. Cloud providers are developing more efficient pipelines for transferring data from the edge to the cloud. This is essential for applications that collect data from sensors, such as self-driving cars and industrial IoT devices.

Finally, the cloud is becoming more collaborative. Cloud providers are developing tools that make it easier for developers to collaborate on code and scientific computing projects, allowing organizations to develop and deploy new applications quickly. This lets them innovate faster and deliver better experiences to their customers.

Bottom Line

There is no end in sight to our pursuit of technological advancement in computing. With every efficiency gain, rebound effects drive us to seek even more computing power, pushing the boundaries of what is possible.

But with great computing power comes great responsibility. It is therefore crucial to guide the development of advanced machine learning models in a responsible and ethical way that safeguards against existential risks from AI going rogue.

The idea of expending more and more energy on computing may sound daunting, but it holds the potential to propel us forward on the Kardashev scale, a measure of a civilization’s technological advancement. If we manage to avoid the pitfalls, increasing computing power could open up a much brighter future for humanity.
