Qdrant: Shaping the Future of Neural Search & Metric Learning

Have you ever used Google’s image search? Instead of typing keywords, you upload an actual image, and the algorithm finds similar images using a machine learning technique called vector similarity search.

It is one example of neural search: using deep neural networks to solve search problems. Rather than looking through all the books in a library yourself, you ask the librarian (aka the neural search engine) to find the information for you.

Qdrant builds an open-source neural search engine, thereby bringing an entire domain of machine learning called deep metric learning into applications such as search, match-making, or recommender systems. Founded by Andre Zayarni and Andrey Vasnetsov in May 2021, Qdrant just closed a €2M pre-seed round in December 2021 led by 42CAP, IBB Ventures, and several business angels.

Learn more about Qdrant from our interview with their co-founder and CEO, Andre Zayarni:

Why Did You Start Qdrant? 

We were looking into neural search, specifically vector similarity search, while building an HR product for our previous employer. Several libraries from Google, Amazon, and Spotify already implement this method, as does Milvus, a startup from China.

However, none of these could match the requirements of our use case. In addition, while such libraries are publicly available, they cannot readily be used and deployed in a production environment. Qdrant is like Elasticsearch or MongoDB: an actual product with an interface built around these libraries, plus custom features such as complex filtering, including geolocation.
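To illustrate what filtered vector search with a geolocation condition could look like, here is a minimal, self-contained sketch in plain Python. The data points, vectors, and coordinates are invented for illustration; a production engine like Qdrant handles this at scale with indexes rather than linear scans.

```python
import math

# Hypothetical toy data: each point has an embedding vector and a
# geolocation payload (latitude/longitude in degrees).
points = [
    {"id": 1, "vector": [0.9, 0.1], "lat": 52.52, "lon": 13.40},  # Berlin
    {"id": 2, "vector": [0.8, 0.2], "lat": 48.14, "lon": 11.58},  # Munich
    {"id": 3, "vector": [0.1, 0.9], "lat": 52.50, "lon": 13.45},  # near Berlin
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) pairs in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def search(query_vector, center, radius_km, top_k=5):
    # First filter by geolocation, then rank the survivors by similarity.
    candidates = [
        p for p in points
        if haversine_km(p["lat"], p["lon"], center[0], center[1]) <= radius_km
    ]
    candidates.sort(key=lambda p: dot(p["vector"], query_vector), reverse=True)
    return [p["id"] for p in candidates[:top_k]]

# Points within 50 km of Berlin, ranked by similarity to the query vector:
print(search([1.0, 0.0], center=(52.52, 13.40), radius_km=50))  # -> [1, 3]
```

The key design point is that the filter and the similarity ranking are combined in one query, instead of post-filtering results from a pure vector library.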

How Does Vector Similarity Search Work? 

First, an encoder (a special type of neural network) turns any input, such as text, images, or even audio/video, into a vector. Next, our neural search engine compares these vectors. How? Well, in the simplest case, you could calculate their scalar product, but that’s only the tip of the iceberg. In the end, this allows you to answer queries such as finding similar vectors and, thus, semantically similar inputs (e.g., images).
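The comparison step above can be sketched in a few lines of Python. The embeddings below are made-up placeholders; in practice an encoder network would produce them. Cosine similarity is used here, which reduces to the scalar product for normalized vectors.

```python
import math

# Hypothetical pre-computed embeddings; in reality an encoder network
# (e.g. an image or sentence encoder) would produce these vectors.
library = {
    "pasta": [0.9, 0.3, 0.1],
    "pizza": [0.8, 0.4, 0.2],
    "sushi": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(query_vector, top_k=2):
    # Rank all stored vectors by similarity to the query vector.
    scored = sorted(
        library.items(),
        key=lambda item: cosine_similarity(item[1], query_vector),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

# A query embedding close to "pasta" retrieves the semantically similar dishes:
print(most_similar([0.9, 0.3, 0.1]))  # -> ['pasta', 'pizza']
```

A real engine would not scan every vector like this; approximate nearest-neighbor indexes make the lookup fast at millions of points.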

One example would be finding similar dishes given your favorite dish: An image of your favorite dish is used as an input, and images of similar dishes will be recommended as an output (look at our online demo). Another example is finding similar startups based on natural language descriptions (another online demo).

Going even beyond our neural search engine, Qdrant brings a new domain of machine learning, so-called metric learning, into applications. Metric learning is machine learning based on the distance between two data points (as measured by a metric), e.g., how similar the vectors corresponding to different images are, rather than learning a fixed set of labels from the data.

Machine learning for classification problems requires vast amounts of labeled data and retraining once more data is included. By contrast, metric learning needs fewer resources and less maintenance, and Qdrant’s mission is to establish metric learning as a viable alternative to classification.
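To make the contrast concrete, here is a minimal nearest-neighbor sketch of the metric learning approach. All embeddings and labels are invented for illustration; the point is that a new class is supported by adding reference points, with no retraining step.

```python
import math

# Hypothetical reference embeddings with labels; an encoder trained with
# metric learning would place similar items close together in this space.
references = [
    ([0.9, 0.1], "cat"),
    ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "dog"),
]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(embedding):
    # The label of the nearest reference embedding wins.
    _, label = min(references, key=lambda r: euclidean(r[0], embedding))
    return label

print(classify([0.85, 0.15]))  # -> cat

# Supporting a brand-new class means adding reference points, not retraining:
references.append(([0.5, 0.5], "rabbit"))
print(classify([0.52, 0.48]))  # -> rabbit
```

A classifier network, by comparison, would need its output layer changed and the whole model retrained to add that "rabbit" class.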

How Did You Evaluate Your Startup Idea? 

We started Qdrant as a personal project because we were passionate about open-source software and about using Rust as a programming language. However, we soon figured out we could use it to match job descriptions, and that others might be interested as well.

We built a website, put a demo online, published our code on GitHub—and got lots of positive feedback. At this point, we realized that we were on to something and started interviewing. 

One growth hack we used was to go through a list of 250+ German AI startups and message those we thought might be interested in using our technology. Interestingly, many startups we had not even contacted reached out to us and asked about our technology.

Now that we have raised our pre-seed round, our goal is to develop our product so that it can reliably handle vast amounts of data at an enterprise level. Check out our product roadmap.