Safer driving through AI
Mohammad Ali, Lead Architect at Zenseact, has been part of our journey since the early Zenuity days, sitting at the intersection of technology and safety. From understanding how AI learns to drive to ensuring it behaves safely on real roads, he’s focused on one goal: creating software that can handle the full range of real-world traffic – safely, smoothly, and reliably.
Tell us about yourself
My name is Mohammad Ali, and I’m the Lead Architect at Zenseact. I’ve been here since the early Zenuity days, working with active safety systems. I’ve always been deeply interested in technology – curious not just about the details but also about the big picture. I enjoy diving into rabbit holes, but I also appreciate stepping back to see how everything connects.
Why do we use AI in our software?
The short answer is: because it works and because we have to. Machine learning has proven to be incredibly powerful compared to traditional methods. In many areas, it outperforms classical techniques, which is why we rely on it so heavily.
Are we using AI for all parts of the software?
No, not for everything. In the product we’re currently working on, a large part of the driving task is handled by neural networks. These take in raw sensor data – pixels from cameras, LiDAR point clouds, radar detections – and produce a trajectory: a path and a speed profile for the car to follow. On top of that, we have other software layers that verify the trajectory is safe. They make sure the car doesn’t go off the road or collide with anything, and they can override the AI if needed.
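To make that layering concrete, here is a minimal Python sketch of the idea. Every name, type, and threshold below is hypothetical – it illustrates the pattern of a learned planner with a checking layer on top, not Zenseact’s actual software.

```python
from dataclasses import dataclass

# Hypothetical sketch -- none of these names come from Zenseact's software.

@dataclass
class Trajectory:
    path: list[tuple[float, float]]  # (x, y) waypoints in metres
    speeds: list[float]              # target speed in m/s at each waypoint

def neural_planner(sensor_data: dict) -> Trajectory:
    """Stand-in for the network: raw sensor data in, trajectory out."""
    # A real planner consumes camera pixels, LiDAR points and radar
    # detections; here we just emit a short, straight, constant-speed plan.
    return Trajectory(path=[(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)],
                      speeds=[10.0, 10.0, 10.0])

def trajectory_is_safe(traj: Trajectory,
                       obstacles: list[tuple[float, float]]) -> bool:
    """Stand-in for the checking layers: reject plans that pass too
    close to any known obstacle."""
    MIN_CLEARANCE = 2.0  # metres, an illustrative threshold
    return all((wx - ox) ** 2 + (wy - oy) ** 2 >= MIN_CLEARANCE ** 2
               for wx, wy in traj.path
               for ox, oy in obstacles)

def safe_fallback(traj: Trajectory) -> Trajectory:
    """Stand-in for the override: keep the path but brake to a stop."""
    n = max(len(traj.speeds) - 1, 1)
    return Trajectory(traj.path,
                      [s * (n - i) / n for i, s in enumerate(traj.speeds)])

def drive_step(sensor_data: dict,
               obstacles: list[tuple[float, float]]) -> Trajectory:
    proposal = neural_planner(sensor_data)
    # The safety layers sit on top of the network and can override it.
    if trajectory_is_safe(proposal, obstacles):
        return proposal
    return safe_fallback(proposal)

# An obstacle near the planned path -> the checking layer overrides the plan.
print(drive_step({}, obstacles=[(5.0, 1.0)]).speeds)  # [10.0, 5.0, 0.0]
```

The design point the sketch captures is that the override path is ordinary, reviewable code sitting on top of the network, rather than another learned component.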
Why do we need AI if we can just write code for every traffic situation?
Because we can’t. The number of possible situations in real traffic is enormous. Trying to write manual logic for every “if this, then that” case would be impossible. Driving from Lindholmen to downtown Gothenburg is one thing; driving through Los Angeles is another. The combinations of conditions and interactions are too complex for human-written rules. AI is the only scalable way to handle that variety.
Our ambition is to build software that can handle the full variety of real-world traffic situations. The complexity of traffic is simply too great for hand-written code. The only way forward is to use data-driven methods that can learn from experience and scale with more data – the “bitter lesson”, as it’s sometimes called. That’s how we’ll make truly capable and safe driving systems.
What is meant by end-to-end driving?
End-to-end driving means the system takes raw sensor input and produces a complete driving plan as output. In our case, that means starting from camera pixels, LiDAR point clouds, and radar detections, and ending with a trajectory: a path and a speed profile. That’s the “end-to-end” process we’re designing right now.
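As a rough illustration of that contract – raw tensors in, a path and a speed profile out – here is a toy PyTorch sketch. The architecture, layer sizes, and input shapes are all invented for readability; a production end-to-end network is far larger and far more carefully designed.

```python
import torch
import torch.nn as nn

# Toy illustration of "raw sensors in, trajectory out". Everything here
# is hypothetical and sized for readability, not realism.

class ToyEndToEndPlanner(nn.Module):
    def __init__(self, horizon: int = 20):
        super().__init__()
        self.horizon = horizon
        self.camera_enc = nn.Sequential(          # camera pixels -> features
            nn.Conv2d(3, 8, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lidar_enc = nn.Linear(3, 8)           # per-point (x, y, z)
        self.radar_enc = nn.Linear(4, 8)           # per-detection (x, y, vx, vy)
        # Head maps fused features to (x, y, speed) for each future step.
        self.head = nn.Linear(24, horizon * 3)

    def forward(self, camera, lidar, radar):
        feats = torch.cat([
            self.camera_enc(camera),               # (B, 8)
            self.lidar_enc(lidar).mean(dim=1),     # pool over points  -> (B, 8)
            self.radar_enc(radar).mean(dim=1),     # pool over returns -> (B, 8)
        ], dim=-1)
        out = self.head(feats).view(-1, self.horizon, 3)
        path, speed = out[..., :2], out[..., 2]
        return path, speed  # a path and a speed profile, as described above

# Example: one camera frame, 1000 LiDAR points, 5 radar detections.
planner = ToyEndToEndPlanner()
path, speed = planner(torch.rand(1, 3, 96, 160),
                      torch.rand(1, 1000, 3),
                      torch.rand(1, 5, 4))
print(path.shape, speed.shape)  # torch.Size([1, 20, 2]) torch.Size([1, 20])
```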
How do we ensure that AI-driven functions behave correctly?
That’s a big challenge. We’ll use what we call guardrails: software layers that constantly monitor and verify what the neural network wants to do. These guardrails check that the planned trajectory doesn’t, for example, hit another car or break traffic rules, and they can override it when needed.
I once showed a video where a car began moving just before a light turned green. The network had learned that behavior from data – it predicted the green light. It was understandable, but still illegal. A guardrail in that case would detect the red light and stop the car from accelerating, even if the AI “thought” it should go.
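In code, a guardrail of that kind could be as simple as clamping the speed profile while the perceived light is red. The sketch below is hypothetical – the names and thresholds are invented – but it shows how a small, hand-written rule can veto the network’s plan.

```python
from dataclasses import dataclass

# Hypothetical guardrail in the spirit of the traffic-light story above.

@dataclass
class WorldState:
    traffic_light: str   # "red", "yellow" or "green", from perception
    ego_speed: float     # current speed in m/s

def red_light_guardrail(planned_speeds: list[float],
                        world: WorldState) -> list[float]:
    """Veto any acceleration while the light ahead is perceived as red.

    The network may have learned to anticipate the green light from data;
    the guardrail enforces the rule regardless of what the network 'thinks'.
    """
    if world.traffic_light != "red":
        return planned_speeds  # nothing to veto, let the plan through
    # Clamp the speed profile so the car never speeds up on red.
    return [min(s, world.ego_speed) for s in planned_speeds]

# The car is stopped at a red light; the network wants to creep forward.
print(red_light_guardrail([0.5, 1.0, 2.0], WorldState("red", 0.0)))
# -> [0.0, 0.0, 0.0]: the guardrail holds the car until the light changes
```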
What role does human oversight play in developing our software?
Human oversight is essential. People are involved in everything from annotating data to reviewing results and defining tests. One of our biggest challenges is that retraining a neural network can change its behavior in unpredictable ways. So we rely heavily on automated tests – and on humans designing those tests – to make sure we keep the behaviors we want after every new training round. Also, like I said, we have manually crafted software layers that monitor and verify what the neural nets want to do.
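A behavioural regression test of the kind described might look like the sketch below, with the simulation harness stubbed out. The scenario, thresholds, and helper names are all hypothetical; the point is that the test pins down outcomes we care about (keep a safe gap, no collisions, no panic braking) rather than exact trajectories, so it stays meaningful across retraining rounds.

```python
from dataclasses import dataclass

# Hypothetical behavioural regression test. In reality the runner would
# re-simulate a recorded traffic situation with the newly trained network.

@dataclass
class ScenarioResult:
    min_gap_to_merging_vehicle: float  # metres
    collisions: int
    max_deceleration: float            # m/s^2

def run_closed_loop(model: str, scenario: str) -> ScenarioResult:
    """Stub standing in for a closed-loop simulation of `scenario`."""
    return ScenarioResult(min_gap_to_merging_vehicle=12.4,
                          collisions=0,
                          max_deceleration=2.1)

def test_yields_to_merging_vehicle():
    # Retraining can change behaviour in unpredictable ways, so the test
    # asserts the outcomes we care about, not exact trajectories.
    result = run_closed_loop(model="latest_retrained",
                             scenario="highway_merge")
    assert result.min_gap_to_merging_vehicle >= 10.0
    assert result.collisions == 0
    assert result.max_deceleration <= 4.0  # no panic braking

test_yields_to_merging_vehicle()  # would run under pytest in a real suite
```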
What makes you smile at work?
Many things. I love exploring deep technical details, but I also enjoy seeing everything come together in the car. When a new feature works better than expected, that’s a great feeling. Just recently, I was in a demo car that yielded early to another vehicle merging in front of us. I hadn’t realized that behavior was active yet; it surprised me in a good way. That kind of moment makes me smile.
