
Stories - May 15, 2025

Keeping AI on a tight leash

Meet people from different parts of Zenseact as they share their stories, insights, and experiences.


Christoffer Petersson, technical expert in deep learning

Tell us about yourself

I’m a technical expert in deep learning at Zenseact and an adjunct associate professor at Chalmers University of Technology. I work at the intersection of research and product development in AI, focusing on data and learning to make autonomous driving and driver-assistance systems more scalable, more intelligent, and safer.

Why do we use AI in our software?  

We use AI to make our software more intelligent, robust, and ultimately safer. Many critical tasks for safe driving, like detecting and classifying objects, are better handled by AI models than traditional methods. These models offer the level of accuracy and robustness we need to ensure our software operates safely and compliantly in real-world environments.

Our software is partly based on AI. How do we ensure that the AI-driven functions behave correctly? 

We invest significant time and effort into validating our AI models. Importantly, we don’t let our software act blindly on AI outputs. While AI helps with perception tasks such as detecting objects, estimating velocity, predicting behavior, and proposing how our vehicle should drive, we have an additional safety layer, guardrails, that checks these proposals for consistency, safety, and compliance with traffic rules. This allows us to leverage the strengths of AI without compromising safety.
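
To make the guardrail idea concrete, here is a minimal, hypothetical sketch; the types, thresholds, and function names are invented for illustration and are not Zenseact’s actual interfaces. The point is the pattern: the AI output is checked against hard constraints, and a verified fallback is used whenever a check fails.

```python
from dataclasses import dataclass

# Hypothetical, simplified types; a real system uses far richer representations.
@dataclass
class Proposal:
    speed_mps: float      # proposed speed, meters per second
    lateral_accel: float  # proposed lateral acceleration, m/s^2
    min_gap_m: float      # predicted closest gap to other road users, meters

@dataclass
class Limits:
    speed_limit_mps: float    # legal speed limit on the current road segment
    max_lateral_accel: float  # comfort/safety bound on lateral acceleration
    min_safe_gap_m: float     # required safety margin to other road users

def guardrail_check(p: Proposal, lim: Limits) -> bool:
    """Accept the AI proposal only if every hard constraint holds."""
    return (
        p.speed_mps <= lim.speed_limit_mps
        and abs(p.lateral_accel) <= lim.max_lateral_accel
        and p.min_gap_m >= lim.min_safe_gap_m
    )

def select_action(ai_proposal: Proposal, fallback: Proposal, lim: Limits) -> Proposal:
    # The AI output is never executed blindly: if any check fails,
    # a verified conservative fallback is used instead.
    return ai_proposal if guardrail_check(ai_proposal, lim) else fallback

if __name__ == "__main__":
    lim = Limits(speed_limit_mps=13.9, max_lateral_accel=3.0, min_safe_gap_m=2.0)
    ai = Proposal(speed_mps=15.0, lateral_accel=1.0, min_gap_m=3.0)  # exceeds limit
    safe = Proposal(speed_mps=10.0, lateral_accel=0.5, min_gap_m=3.0)
    print(select_action(ai, safe, lim))  # prints the fallback proposal
```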

What is meant by “end-to-end driving”? 

End-to-end driving refers to using a single, large AI model that takes raw sensor data and directly outputs driving commands like steering, acceleration, and braking. While we use AI to propose driving actions, we always apply safety guardrails to verify those proposals, and we have backup systems to ensure safety. So decisions never go directly from AI to execution without oversight.
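
As a rough, purely illustrative contrast (all names and data shapes below are invented), an end-to-end system exposes a single learned function from raw sensors to commands, whereas a modular design produces intermediate outputs that a guardrail layer can inspect before anything is executed:

```python
import numpy as np

# --- End-to-end style: one learned function, no inspectable intermediates ---
def end_to_end_model(camera: np.ndarray, lidar: np.ndarray) -> np.ndarray:
    """Map raw sensor data directly to [steering, acceleration, braking].
    Stubbed out here; in practice this would be one large neural network."""
    return np.zeros(3)

# --- Modular style: intermediate results that a guardrail layer can check ---
def perceive(camera: np.ndarray, lidar: np.ndarray) -> list:
    """Detect and classify objects, estimate velocities (stub)."""
    return []

def propose_action(objects: list) -> np.ndarray:
    """Propose [steering, acceleration, braking] from the perceived scene (stub)."""
    return np.zeros(3)

def drive(camera: np.ndarray, lidar: np.ndarray) -> np.ndarray:
    objects = perceive(camera, lidar)
    proposal = propose_action(objects)
    # A guardrail check like the one sketched earlier would vet `proposal`
    # here; the AI output never goes straight to the actuators.
    return proposal
```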

Are we using AI for all parts of the software? 

No, we are not using AI for all parts of the software. AI is excellent for tasks like object detection, prediction, and driving proposals. However, real-world traffic can present unexpected and rare events that AI may not have seen in training. That’s why we combine AI with traditional software engineering, safety guardrails, and fallback mechanisms to handle edge cases reliably.

What role does human oversight play in our software? 

Human oversight is essential. Our engineers define safety boundaries, validate AI models, and take responsibility for ensuring the software is safe. We also use human feedback to improve training data. Ultimately, while AI is a powerful tool, it’s the people who ensure that our software performs safely and in compliance with regulations.