SafeAD: Shaping the Future of Computer Vision for Autonomous Driving

What do ChatGPT and computer vision for autonomous driving have in common? Both are built on the same underlying neural network design: the transformer architecture. This architecture might not only write nice poems but also make large-scale autonomous driving a reality.

Autonomous vehicles used to be pure sci-fi, but today they already operate under very controlled conditions, from metro systems to spaceflight. However, autonomous cars that can drive anywhere still face enormous challenges, especially since they need to account for all the different traffic rules, weather conditions, and vehicle types.

SafeAD was founded in 2021 by Julius Kümmerle, Tilman Kühner, and Niels Ole Salscheider, building on a decade of research at the Karlsruhe Institute of Technology, with the goal of making large-scale autonomous driving a reality for cars. The company’s ambition is to build an AI system that can analyze and understand any driving situation without needing to hardcode streets or other road users. Realizing such a global scene understanding will be the basis for steering cars safely and autonomously.

Learn more about the future of computer vision for autonomous driving in our interview with co-founder and CEO Julius Kümmerle.

Why Did You Start SafeAD?

You could do many things in life, but often you find yourself doing stuff other people told you to do. Starting a startup was partly motivated by the ambition to have complete freedom, realize our own ideas, and not have to ask a boss for permission or wait for the approval of a budget. Of course, startups have their limits, but we think as a team about what we want to do and then give it a try.

Because we have the freedom to decide for ourselves how and what we work on, everyone is highly motivated and very creative, and I would never give away that freedom. As a mindset, I don’t see it as work but rather something I really want to do – and then you simply go for it. 

Also, as techies, we love to work on cool technologies and solve new problems! We met as a team during our Ph.D. studies at the Karlsruhe Institute of Technology, and for years, we studied autonomous driving in depth. As the technology evolved, we seized the moment and put our learnings into practice to positively impact society. 

How Does Computer Vision for Autonomous Driving Work?

We focus on making the car see and understand the environment around it. It’s not only a question of where other cars or pedestrians are, but also how they relate to one another. Developing true scene understanding is a big next step for autonomous driving. It requires going from single detection tasks to understanding the environment as a whole, gathering information from all the sensors to develop this global understanding. 

Our approach is end-to-end perception: producing a perception output directly from the initial raw sensor data. End-to-end driving, by contrast, produces actuator outputs to steer the car directly and has major problems around safety, interpretability, and certification. Although some companies have raised a lot of money for it, the challenges around end-to-end driving are enormous.

With end-to-end perception, our goal instead is to develop this global scene understanding of what objects are there and how they relate to each other. In the last two to three years, there has been a big shift in perception methods and, of course, the rise of AI. Camera perception has moved from processing in the image plane to including additional, intermediate representations of the data. And sensor fusion has turned out to be crucial for autonomous driving: a neural network, for example a vision transformer, fuses all the raw data from cameras, radar, and LiDAR, so that all the data is at hand in the same representation.
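To make this concrete, here is a minimal, illustrative PyTorch sketch of what such transformer-based sensor fusion could look like: per-sensor feature tokens are fused via cross-attention into one shared set of learned scene queries. All module names, dimensions, and token counts below are hypothetical assumptions for illustration, not SafeAD's actual implementation.

```python
# Minimal sketch of transformer-based sensor fusion (illustrative only;
# names and sizes are hypothetical, not SafeAD's implementation).
import torch
import torch.nn as nn

class SensorFusionTransformer(nn.Module):
    """Fuses per-sensor feature tokens into one shared scene representation."""

    def __init__(self, dim=256, num_layers=4, num_heads=8, num_queries=900):
        super().__init__()
        # Learned queries define the shared (e.g. bird's-eye-view) representation.
        self.scene_queries = nn.Parameter(torch.randn(num_queries, dim))
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=num_layers)

    def forward(self, camera_tokens, lidar_tokens, radar_tokens):
        # Concatenate tokens from all sensors; cross-attention lets each
        # scene query gather evidence from any modality.
        memory = torch.cat([camera_tokens, lidar_tokens, radar_tokens], dim=1)
        batch = memory.shape[0]
        queries = self.scene_queries.unsqueeze(0).expand(batch, -1, -1)
        return self.decoder(queries, memory)  # (batch, num_queries, dim)

# Usage: in practice, tokens would come from per-sensor encoders (image
# backbones, point-cloud encoders) projected to the same feature dimension.
fusion = SensorFusionTransformer()
cam = torch.randn(1, 1200, 256)    # camera feature tokens
lidar = torch.randn(1, 800, 256)   # LiDAR feature tokens
radar = torch.randn(1, 100, 256)   # radar feature tokens
scene = fusion(cam, lidar, radar)  # shared scene representation
```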

Then one can apply additional neural networks to get ‘next-gen capabilities,’ like understanding the road topology or figuring out which traffic drives in which lane. Ultimately, the goal is to have one large model of the environment that models all the interactions indirectly, that is, without a human having to specify the behavior manually.
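On top of such a fused representation, task-specific heads could then produce these capabilities. A short sketch continuing the example above; the heads and output sizes are made-up assumptions, not SafeAD's design:

```python
# Illustrative task heads reading the fused scene representation from above.
import torch.nn as nn

class PerceptionHeads(nn.Module):
    def __init__(self, dim=256, num_classes=10, max_lanes=16):
        super().__init__()
        self.object_head = nn.Linear(dim, num_classes)       # what is each query?
        self.box_head = nn.Linear(dim, 7)                    # 3D box: x, y, z, w, l, h, yaw
        self.lane_assignment_head = nn.Linear(dim, max_lanes)  # which lane does it occupy?

    def forward(self, scene):  # scene: (batch, num_queries, dim)
        return {
            "classes": self.object_head(scene),
            "boxes": self.box_head(scene),
            "lanes": self.lane_assignment_head(scene),
        }
```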

Beyond a certain level of complexity, hand-coding rules doesn’t work anymore, and it’s just about scaling the model. We benchmark the system performance and accuracy of the perception output and also check traffic rules, speed limits, physical limits, or even the statistics of how cars usually drive.
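Such checks could look roughly like the following sketch, which flags physically implausible accelerations and speeds far above the limit for a tracked object. The thresholds and function names are invented for illustration, not SafeAD's actual benchmarks.

```python
# Hypothetical sanity checks on perception output: flag tracks that violate
# physical or legal limits. Thresholds are illustrative assumptions.
MAX_PLAUSIBLE_ACCEL = 12.0  # m/s^2, beyond typical vehicle capability

def check_track(speeds_mps, timestamps_s, speed_limit_mps):
    """Return a list of warnings for one tracked object."""
    warnings = []
    for i in range(1, len(speeds_mps)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        accel = (speeds_mps[i] - speeds_mps[i - 1]) / dt
        if abs(accel) > MAX_PLAUSIBLE_ACCEL:
            warnings.append(
                f"implausible acceleration {accel:.1f} m/s^2 at t={timestamps_s[i]}"
            )
    if max(speeds_mps) > 1.5 * speed_limit_mps:
        warnings.append("speed far above limit; possible tracking error")
    return warnings
```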

It’s hard to say what will drive the greatest improvements in autonomous driving – pun intended. Autonomous driving will come with gradual improvements: Ideally, I want to update my car with improvements every day. 

Improvements in computing, larger models, and an enormous amount of research around novel machine learning methods such as neural radiance fields (NeRF, a method for synthesizing novel views of complex scenes) push the state of the art. They will enable greater autonomy and allow cars to drive in more scenarios. Yet fully autonomous cars that can drive everywhere are really a moonshot, and with current technology and computing power, it may still take many years to get there.

How Did You Evaluate Your Startup Idea?

We figured that autonomous driving today is not so much about that big dream of a fully autonomous car driving through Phoenix, where Waymo showed that cars can drive autonomously under very specific conditions. In our view, for now, it’s more about making existing cars safer through driver-assistance systems and achieving partial autonomy, like Mercedes did for level 3 autonomous driving.

An important distinction there is between hands-off and eyes-off driving: hands-off is convenient, but if you still need to monitor the car actively, that’s stressful. The first level 3 autonomous cars will allow you to take your eyes off the road. When a critical situation occurs, you get an alarm and 10 seconds to focus back on the street. This is a massive comfort improvement and a viable approach for large-scale deployment, which is where a lot of money currently lies. And it’s still a huge technological challenge to provide these guarantees and ensure interpretability: the car has to figure out whether it can handle the scene or has to raise an alarm.

Throughout our Ph.D.s, we worked with German OEMs and tier-1 suppliers on research projects with commercial ties, where the motivation was to have the research findings implemented in the next generation of cars. So we knew there was interest from the industry. But when you get out of the university as a small startup, no one knows you, no one has been waiting for you, and everyone wonders what you’re doing with this startup thing. 

Breaking into the automotive industry required building a strong network and talking to the right people. We thought that since we come from research and have cool technology that large companies lack, they would naturally be interested. But we learned that all the cool technology is worthless without the right connections.

People won’t even tell you that they’re interested in your technology unless they know you. You need to talk to established industry experts, the ones who have built big departments of 200 people and managed big projects with big budgets. Once they got to know us and evaluated our technology, we landed our first projects with real customers, which helped us a lot in tailoring our solution to customer needs.

What Advice Would You Give Fellow Deep Tech Founders?

Look out for the people who can open doors for you. We just recently got the chance to bring such an industry expert onto our team, and now industry people are listening to us. They are not cheap, and you might need to give up some equity, but it’s really worth it. Think about this right at the beginning: who can help you? And don’t keep what you’re working on secret – only then can people help you.

Currently, there are many changes in the automotive industry, with lots of opportunities arising in a tough market as companies start to think differently: it’s not just about selling a car but about putting sensors everywhere to enable new, data-driven business models. With computing power rising, you can do many more things than just drive a car.

Further Reading

Founder of the month: SafeAD – Article by KIT Gründerschmiede, the startup initiative of the Karlsruhe Institute of Technology

SafeAD wins start-up pitch – Press release by KIT Gründerschmiede about SafeAD winning the startup pitch of the German Federal Ministry for Digital and Transport (#BMDV)