LGN.AI: Shaping the Future of Real-World AI
Reality is messy. Environments keep changing, new trends emerge, and old data points become unreliable, and machine learning needs to cope with all of that. Model degradation is a severe issue when deploying machine learning models in the real world.
LGN addresses precisely this issue by building the world’s first continuous learning loop for edge AI applications: the open-source, cloud-native framework Neuroform. It pushes edge computing forward by orchestrating the learning between AI models running on edge devices.
Founded by Daniel Warner and Vladimir Čeperić in 2019, LGN raised a seed round in March 2021 from Trucks Venture Capital, Luminous Ventures, InMotion Ventures (by Jaguar Land Rover), and several business angels, including John Taysom and Oliver Cameron. It went through the Intel Ignite program.
Learn more about the future of real-world AI from our interview with the founders, Daniel and Vlad:
Why Did You Start LGN?
We met as part of a startup incubator, where we trained a machine learning model for a laser vision system under harsh conditions. We soon found that exposing the model to real-world data drastically diminished its accuracy compared to the validation dataset, so we had to retrain it constantly to keep up with changing environmental conditions.
We didn’t follow through on the laser project. However, we learned that the current way of building AI models is broken and that there might be a big opportunity for building robust models—and more so, for orchestrating the continuous learning between multiple real-world deployments of these models.
Just like humans evolved considerably by inventing languages and exchanging information, imagine how AI models may evolve if they share data and optimize each other.
Where Does The Name LGN Come From?
LGN stands for the lateral geniculate nucleus, a relay station in the brain’s visual pathway that focuses attention and swiftly integrates new, unforeseen information. Imagine a child running in front of your scooter: the LGN will immediately pick up and contextualize that visual information.
Similarly, for machine learning models, we focus attention on outlier data points that indicate conditions have changed or new trends are emerging, and we use them to adapt the model. Doing this at scale results in drastically accelerated learning cycles and more robust systems.
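To make the idea concrete, here is a minimal sketch, in Python, of one common way to flag "confusing" outlier inputs on a device: score the deployed model's predictive entropy and keep only the high-entropy samples. The function names and the threshold are illustrative assumptions, not LGN's actual implementation.

```python
# Minimal sketch (not LGN's code): flag inputs the deployed model is unsure about.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each softmax output; higher means the model is more confused."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_outliers(probs: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Return indices of samples whose predictive entropy exceeds the threshold."""
    return np.where(predictive_entropy(probs) > threshold)[0]

# Example: a batch of softmax outputs from an edge model (3 classes).
probs = np.array([
    [0.98, 0.01, 0.01],   # confident  -> stays on the device
    [0.40, 0.35, 0.25],   # confused   -> candidate to send to the cloud
])
print(select_outliers(probs))  # -> [1]
```

In practice the uncertainty signal could just as well be a drift statistic on the input features; the point is that only the flagged samples leave the device.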
How Does Machine Learning On The Edge Work?
Our framework, Neuroform, allows us to monitor models running on edge devices and analyze which new data is confusing. Only this data is sent to the cloud, where it is usually automatically labeled—humans only have to look at edge cases. The original model is then retrained using the newly labeled data and pushed back to the edge device—this is a continuous learning loop.
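The loop itself can be pictured as a simple orchestration cycle. The sketch below shows the shape of that cycle with hypothetical placeholder functions (collect_confusing_samples, auto_label, retrain, deploy_to_edge); it is not Neuroform's API, just the pattern described above.

```python
"""Hypothetical sketch of one iteration of a continuous learning loop."""
import random

def collect_confusing_samples(edge_batch, model, threshold=0.8):
    """Keep only inputs the current model is unsure about (placeholder logic)."""
    return [x for x in edge_batch if model["confidence"](x) < threshold]

def auto_label(samples):
    """Stand-in for cloud-side automatic labeling; edge cases would go to humans."""
    return [(x, "auto_label") for x in samples]

def retrain(model, labeled_samples):
    """Stand-in for fine-tuning the original model on the newly labeled data."""
    updated = dict(model)
    updated["version"] += 1
    return updated

def deploy_to_edge(model):
    print(f"pushing model v{model['version']} back to the edge device")

# One cycle: monitor -> filter -> label -> retrain -> redeploy.
model = {"version": 1, "confidence": lambda x: random.random()}
edge_batch = [f"frame_{i}" for i in range(10)]

confusing = collect_confusing_samples(edge_batch, model)
labeled = auto_label(confusing)
model = retrain(model, labeled)
deploy_to_edge(model)
```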
Training a model involves transfer learning, i.e., training on adjacent tasks and fine-tuning pre-trained models. Models can exchange data efficiently using latent space representations: compact mathematical encodings that compress data and capture its key features.
For example, in a computer vision application, the raw camera data is not exchanged; rather, the camera input is compressed into a vector, the latent space representation, which can later be decompressed. Of course, this only makes sense for AI models that experience similar environmental conditions and are geographically close to each other.
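As an illustration of the idea, the toy autoencoder below compresses a flattened 64x64 camera frame into a 32-dimensional latent vector that could be transmitted instead of raw pixels, and reconstructs an approximation on the other side. The architecture and sizes are arbitrary assumptions for the sketch, not LGN's compression scheme.

```python
# Toy illustration of exchanging a latent representation instead of raw camera data.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64, latent_dim=32):
        super().__init__()
        # Encoder: compress a flattened 64x64 grayscale frame into 32 numbers.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        # Decoder: reconstruct an approximation of the frame from the latent vector.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        z = self.encoder(x)           # this vector is what would be transmitted
        return self.decoder(z), z

model = TinyAutoencoder()
frame = torch.rand(1, 64 * 64)        # stand-in for a camera frame
reconstruction, latent = model(frame)
print(latent.shape)                   # torch.Size([1, 32]) -> ~128x smaller than raw
```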
Nowadays, machine learning models are deployed everywhere, from Arm CPUs to Raspberry Pis. Thus, the challenge is to connect many devices, work with different hardware architectures, and deal with the limited internet connectivity of some devices.
Our grand vision is to connect all these models and make them work better together, reducing data annotation costs and, thereby, cloud costs.
How Did You Evaluate Your Startup Idea?
We talked to potential clients early on and got feedback that the cost of retraining an AI model was a real concern for them. We weren’t following a 5-page business plan but rather iterating based on customer feedback and acquiring new customers simply by demonstrating the cost savings from using Neuroform.
For example, we supervised an AI model for optimizing traffic lights in the UK and then deployed the same model in Helsinki. Usually, this would require retraining the entire model and a decent budget. With Neuroform, it took just about a week to adapt to the new conditions, traffic patterns, and daylight times—without any model degradation.
We figured that we could also build machine learning models for our customers. Yet, our goal is not to create a consulting business; that’s why we offer to build custom models free of charge, which Neuroform then maintains. That way, we can demonstrate how much models change over time, and customers can decide whether to keep Neuroform’s supervision or simply keep the AI model.