LGN.AI: Shaping the Future of Real-World AI
Reality is messy. Environments keep changing, new trends emerge, and old data points become unreliable – and machine learning needs to deal with it. Model degradation is a severe issue when deploying machine learning models in the real world – as Tesla can attest.
LGN addresses precisely this issue by building the world’s first continuous learning loop for edge AI applications: the open-source, cloud-native framework “Neuroform.” It pushes the future of computing forward by orchestrating learning between AI models on edge devices.
Founded by Daniel Warner and Vladimir Čeperić in 2019, LGN raised a seed round in March 2021 from Trucks Venture Capital, Luminous Ventures, InMotion Ventures (by Jaguar Land Rover), and several business angels, including John Taysom and Oliver Cameron.
Learn more about the future of real-world AI from our interview with the founders Daniel and Vlad:
Why Did You Start LGN?
We met as part of a startup incubator, where we trained a machine learning model for a laser vision system operating under harsh conditions – and we soon found that exposing the model to real-world data drastically diminished its accuracy compared to the validation dataset. We had to constantly retrain the model to keep up with changing environmental conditions.
We didn’t follow through on the laser project. However, we learned that the current way of building AI models is broken and that there might be a big opportunity in building robust models – and, even more so, in orchestrating continuous learning between multiple real-world deployments of these models.
Just like humans evolved considerably by inventing languages and exchanging information, imagine how AI models may evolve if they share data and optimize each other.
Where Does The Name LGN Come From?
LGN stands for the lateral geniculate nucleus, a relay center in the brain’s visual pathway that helps focus attention and integrate new, unforeseen information swiftly. Imagine a child running in front of your scooter – the LGN will immediately pick up and contextualize that visual information.
Similarly, for machine learning models, we focus attention on outlier data points that indicate that conditions have changed or new trends are emerging – and that allow us to adapt the model. Doing this at scale results in drastically accelerated learning cycles and more robust systems.
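In practice, that “focusing of attention” can be as simple as flagging inputs the model is unsure about or that look unlike the training data. The sketch below illustrates such a filter, assuming prediction confidences and feature embeddings are available on the device; the thresholds and function names are illustrative, not part of Neuroform:

```python
# Minimal sketch (not LGN's actual implementation): flag "confusing" inputs
# by combining low prediction confidence with distance from the training
# data's embedding distribution.
import numpy as np

def fit_reference(train_embeddings):
    """Summarize the training-time embedding distribution."""
    mean = train_embeddings.mean(axis=0)
    std = train_embeddings.std(axis=0) + 1e-8
    return mean, std

def is_confusing(embedding, probs, mean, std,
                 conf_threshold=0.6, z_threshold=3.0):
    """True if the model is unsure or the input looks unlike training data."""
    low_confidence = probs.max() < conf_threshold
    z_score = np.abs((embedding - mean) / std).mean()
    return bool(low_confidence or z_score > z_threshold)

# Example: only flagged samples would be uploaded for (re)labeling.
rng = np.random.default_rng(0)
mean, std = fit_reference(rng.normal(size=(1000, 16)))
drifted_input = rng.normal(loc=4.0, size=16)   # input far from training data
probs = np.array([0.4, 0.35, 0.25])            # uncertain prediction
print(is_confusing(drifted_input, probs, mean, std))  # True -> send to cloud
```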
How Does Machine Learning On The Edge Work?
Our framework Neuroform allows us to monitor models running on edge devices and analyze which new data is confusing. Only this data is sent to the cloud, where it is automatically labeled in most cases – humans only have to look at edge cases. The original model is then retrained using the newly labeled data and pushed back to the edge device – closing the continuous learning loop.
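To make the loop concrete, here is a toy-scale sketch of the monitor → upload → label → retrain → redeploy cycle described above. The function names (collect_confusing_samples, auto_label, retrain, push_model) are hypothetical placeholders, not Neuroform’s API:

```python
# Hedged sketch of the continuous learning loop; all names and data are toy.
from dataclasses import dataclass, field

@dataclass
class Model:
    version: int = 0
    data_seen: list = field(default_factory=list)

def collect_confusing_samples(stream):
    """Edge side: keep only inputs the current model finds confusing."""
    return [x for x in stream if x.get("confidence", 1.0) < 0.6]

def auto_label(samples):
    """Cloud side: label automatically; route true edge cases to humans."""
    return [{**s, "label": s.get("suggested_label", "needs_human_review")}
            for s in samples]

def retrain(model, labeled):
    """Cloud side: fine-tune the model on the newly labeled data."""
    model.data_seen.extend(labeled)
    model.version += 1
    return model

def push_model(model):
    """Deploy the updated model back to the edge device."""
    print(f"deploying model v{model.version} "
          f"trained on {len(model.data_seen)} new samples")

# One iteration of the loop with toy data:
stream = [{"confidence": 0.9}, {"confidence": 0.4, "suggested_label": "cat"}]
model = Model()
flagged = collect_confusing_samples(stream)
push_model(retrain(model, auto_label(flagged)))
```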
Training a model involves transfer learning, i.e., training on adjacent tasks and fine-tuning pre-trained models. Models can also exchange data efficiently using latent space techniques – a mathematical representation that compresses data and identifies its key features.
For a computer vision application, for example, we don’t exchange the raw camera data; instead, the camera input is compressed into a vector – the latent space representation – which can later be decompressed. Of course, this only makes sense for AI models that experience similar environmental conditions or are geographically close to each other.
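As a rough illustration of the latent-space idea, the sketch below uses PCA as a stand-in for the learned encoders alluded to here: reference frames define a small latent basis, new frames are compressed into 32 numbers for exchange, and an approximate reconstruction is recovered on the other side. All sizes and data are synthetic:

```python
# Toy latent-space compression via PCA (a stand-in for a learned encoder).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic low-rank "camera frames" so a small latent space captures them well.
frames = rng.normal(size=(500, 32)) @ rng.normal(size=(32, 1024)) \
         + 0.01 * rng.normal(size=(500, 1024))

# Fit a linear "encoder" on reference frames via SVD.
mean = frames.mean(axis=0)
_, _, components = np.linalg.svd(frames - mean, full_matrices=False)
basis = components[:32]                      # 32-dimensional latent space

def encode(frame):
    return (frame - mean) @ basis.T          # 1024 values -> 32 to transmit

def decode(latent):
    return latent @ basis + mean             # approximate reconstruction

latent = encode(frames[0])
reconstruction = decode(latent)
print(latent.shape, float(np.mean((frames[0] - reconstruction) ** 2)))
```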
Nowadays, machine learning models are deployed everywhere, from Arm CPUs to Raspberry Pis. The challenge is thus to connect many devices, work with different hardware architectures, and deal with the limited internet connectivity of some devices.
Our grand vision is to connect all these models and make them work better together, reducing data annotation costs and thereby cloud costs.
How Did You Evaluate Your Startup Idea?
We talked to potential clients early on and heard that the cost of retraining an AI model is a real concern for them. We weren’t following a 5-page business plan but rather iterating based on customer feedback, acquiring new customers simply by demonstrating the cost savings of using Neuroform.
For example, we supervised an AI model for optimizing traffic lights in the UK and then deployed the same model in Helsinki. Usually, this would require retraining the entire model – and a decent budget. With Neuroform, it took just about a week to adapt to the new conditions, traffic patterns, and daylight hours – without any model degradation.
We figured that we could also build machine learning models for our customers. Yet our goal is not to create a consulting business – that’s why we offer to build custom models free of charge, which are then maintained by Neuroform. That way, we can demonstrate how much models change over time, and customers can decide whether they want to keep Neuroform’s supervision or simply keep the AI model.
Who Should Contact You?
We are always looking for excellent people to join our team. In addition, please reach out if your company needs help deploying AI models in the real world – or if you’d like us to design a custom AI model for you.
Funnily enough, we often get to talk not only to heads of product and data scientists but also to CFOs – as we are much cheaper than retraining an AI model from the ground up.
Further Reading
LGN, which helps companies deploy AI at scale, raises $2M – Post by VentureBeat on the seed round of LGN
Why Continual Learning is the key towards Machine Intelligence – Medium article by Vincenzo Lomonaco, founder of the non-profit ContinualAI, on the importance of continual learning
Understanding Latent Space in Machine Learning – Medium article by Ekin Tiu