The widespread availability of unsupervised automation for consumer cars is years away. But that doesn’t mean the fundamental technology isn’t here. Instead, the challenge lies in deploying the software and continuously improving it based on its performance in traffic.
There are various attempts to address safety for automated vehicles. One option is to rely solely on a rigid, rules-based strategy. Such a policy simplifies identifying responsibility and fault after an accident. But is that more important than avoiding the accident in the first place? Alternatively, you can embrace a more dynamic perspective: all accidents are preventable, provided you develop technology capable of managing rule-breaking and unforeseen scenarios.
Although challenging, this is the path forward for us. Human-centric values lay the foundation of our safety philosophy. And this goes beyond moral grandstanding.
In essence, it’s a matter of control. At Zenseact, we develop software for every critical component of the AD system – from how sensors perceive and understand the environment to the actuation system telling the car what to do. This comprehensive involvement enables us to rapidly improve, design, and deploy software based on real-world conditions. As a result, we often describe this safety infrastructure as “from pixel to torque” or “from sensing to actuation.”
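To make that concrete, here’s a minimal sketch of what a “pixel to torque” chain could look like, written in Python. Everything in it – stage names, interfaces, numbers – is invented for illustration; it’s a toy, not our actual stack.

```python
from dataclasses import dataclass

# A toy "pixel to torque" chain. Every name and number here is invented
# for illustration; real stacks involve sensor fusion, tracking, and
# motion planning, but the end-to-end shape is the same.

@dataclass
class Detection:
    distance_m: float         # distance to the object ahead
    closing_speed_ms: float   # how fast we are approaching it, m/s

@dataclass
class ActuationCommand:
    brake_torque_nm: float    # braking torque requested from the actuators
    steering_angle_rad: float

def perceive(raw_pixels: list) -> Detection:
    # Stand-in for the perception stack: in reality, deep neural networks
    # turn camera/radar/lidar data into tracked objects.
    return Detection(distance_m=18.0, closing_speed_ms=6.0)

def decide(obj: Detection) -> ActuationCommand:
    # Stand-in for planning: brake harder as time-to-collision shrinks.
    ttc_s = obj.distance_m / max(obj.closing_speed_ms, 0.1)
    brake = 0.0 if ttc_s > 3.0 else min(1.0, 3.0 / ttc_s) * 2000.0
    return ActuationCommand(brake_torque_nm=brake, steering_angle_rad=0.0)

def pixel_to_torque(raw_pixels: list) -> ActuationCommand:
    # The whole chain in one place: pixels in, torque out.
    return decide(perceive(raw_pixels))

print(pixel_to_torque([0.0] * 307200))  # a fake 640x480 frame
```

The point isn’t the code but its shape: one unbroken chain from sensing to actuation, where every link can be improved and redeployed.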
Controlling the complete software stack might also offer a modicum of comfort in a world increasingly dominated by machines: for someone to relinquish complete control of driving to the system, someone else should maintain full control over that same system. This homespun axiom isn’t meant to stoke fears of the singularity – you know, the point when machines become uncontrollable as they realize they can self-improve without human assistance. It’s intended more as a warm blanket, a reassurance that we’re there with you every step of the way.
In any case, this level of control allows us to create a safer product. Ultimately, it helps us save more lives.
This “setup” is what enables us to develop and implement safety software improvements quickly, and why our company motto is “Towards zero. Faster.” Previously, getting a safer car meant buying a new one. Nowadays, with regular updates, vehicles can be improved long after production – much like phones or computers. How fast we can make cars safer largely depends on our ability to harness these update capabilities. By accelerating improvement loops, shortening development cycles, and frequently deploying high-capacity software to the fleet, we can enhance vehicle safety at an increasing pace. Put differently, safety is no longer defined by hardware alone. Active safety is the software. And we control that all the way.
That, in a nutshell, is our secret sauce. But let’s start from the beginning. Let’s look at what “From pixel to torque” means.
First, a car needs eyes. If it’s going to drive by itself, it must be able to see the world around it. With sensors, a vehicle can detect stationary and moving objects close by or even hundreds of meters away. It can spot a distant deer on the road and a ball-chasing kid jumping out from behind a parked car, even in the dark. Furthermore, it can determine where the path is free from obstacles while considering the risk that something might appear suddenly. You may not detect pedestrians hidden behind a parked truck, but you should still anticipate their potential presence, right? That’s a precautionary approach to safety. And that’s what the car must be able to do.
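That precautionary idea can be made concrete with a bit of textbook physics: cap the speed so the car could always stop before reaching a region it cannot see into. A toy Python sketch, with made-up deceleration and reaction-time values:

```python
# Toy "precautionary speed" calculation: even with no pedestrian detected,
# cap the speed so the car could stop before an occluded region, e.g. the
# space behind a parked truck. Deceleration and reaction time are made up.

def safe_speed_ms(distance_to_occlusion_m: float,
                  max_decel_ms2: float = 6.0,
                  reaction_time_s: float = 0.3) -> float:
    """Highest speed from which the car can still stop before the occlusion.

    Solves v * t + v^2 / (2 * a) = d for v (reaction distance plus braking
    distance must fit within the distance to the occluded zone).
    """
    a, t, d = max_decel_ms2, reaction_time_s, distance_to_occlusion_m
    return -a * t + (a * a * t * t + 2 * a * d) ** 0.5

# A truck blocks the view 15 m ahead: limit speed even though nothing is seen.
print(f"{safe_speed_ms(15.0):.1f} m/s")  # ~11.7 m/s, roughly 42 km/h
```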
Still, all that hardware would be useless without a brain – a central processor that tells the eyes what they’re looking at and decides what to do with the information. Should the car brake hard, slow down, or accelerate? Should it veer to the side or get back on track?
So, how fast can a computer make sense of the visual world? Well, this magical feat of computation is performed in milliseconds, thanks to deep learning: algorithms executed in deep neural networks. We use such networks to extract the relevant information – for instance, the exact location of surrounding vehicles or pedestrians – from the imagery provided by the sensors and cameras. This machine-learning process is similar to how the human brain deals with the visual world. Neural networks are, after all, loosely modeled on the way biological neurons signal one another.
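For a feel of what that computation looks like, here’s a deliberately tiny forward pass through a two-neuron “network” in Python. Real perception networks have millions of learned weights; the weights and features below are arbitrary stand-ins, but the principle – weighted sums and nonlinearities, evaluated in milliseconds – is the same:

```python
import math
import time

# A deliberately tiny neural network forward pass: two hidden neurons,
# one output "pedestrian?" score. The weights and input features are
# arbitrary stand-ins, not learned values.

def relu(x: float) -> float:
    return max(0.0, x)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out) -> float:
    # Each layer: weighted sums followed by a nonlinearity.
    hidden = [relu(sum(w * x for w, x in zip(row, features))) for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

features = [0.8, 0.1, 0.5]                        # stand-in pixel features
w_hidden = [[0.4, -0.2, 0.7], [0.3, 0.9, -0.5]]   # hidden-layer weights
w_out = [1.2, -0.6]                               # output-layer weights

t0 = time.perf_counter()
score = forward(features, w_hidden, w_out)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"pedestrian score: {score:.2f} (computed in {elapsed_ms:.3f} ms)")
```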
As easy as one-two-three?
Not exactly. But a standard declaration – “We use advanced deep learning and cutting-edge technology to push the boundaries of automated driving.” – doesn’t quite cut it. Sure, we often refer to ourselves as an AI company, and rightly so; the very premise (and promise, for that matter) of our solution – making cars drive better than humans – rests on our ability to harness the power of deep learning. But it’s so much more than that. Yes, we use AI (in fact, it’s pretty safe to say that we’re pioneers in the field of applied automotive AI) to enhance our software, but we also have a few other tricks up our sleeve.
Firstly, we’re owned by Volvo, the world’s safest car brand. Building on that legacy, experience, and knowledge in car safety reminds us never to compromise on safety. Secondly, there’s “en rejäl dos jävlaranamma” involved (not exactly translatable, but roughly “a hefty dose of sheer grit”), rooted in a deeply ingrained culture of thinking outside the box. There’s more. An infrastructure elegantly designed to empower our engineers. The ongoing quest for improved efficiency. The staggering number of patents we file each year. The collaborations. The research. The workplace culture. The list goes on.
Anyway, let’s return to the topic of the software. Here’s the beauty of it: when installed in a car, our software logs any occurrence – whether it’s an accident, an incident, or a near-miss – and adds this experience to an ever-expanding library filled with information from other vehicles using the same technology. Our deep learning models become smarter over time; the more real-life data we feed into the algorithms, the more reliable and accurate they become. Notably, the system is always active, collecting data – even before the vehicle assumes control of driving. This ensures we’ll confidently know when the car is ready to take over.
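Conceptually, each logged occurrence is just a structured record appended to that library. A hypothetical sketch – the field names and thresholds are invented for illustration, not our actual data format:

```python
import datetime
import json

# Hypothetical fleet event logging: every accident, incident, or near-miss
# becomes a structured record appended to a shared library and later pooled
# for training. All field names here are invented for illustration.

def log_event(kind: str, snapshot: dict, path: str = "fleet_events.jsonl") -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "kind": kind,          # "accident" | "incident" | "near_miss"
        "snapshot": snapshot,  # what the sensors saw around the event
    }
    with open(path, "a") as f:  # append-only: the library only ever grows
        f.write(json.dumps(record) + "\n")

log_event("near_miss", {"min_ttc_s": 0.9, "ego_speed_ms": 13.2})
```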
Now, constant refinement needs to be chopped up into bite-sized updates. Once we determine that the system has been trained on sufficient data to manage specific traffic scenarios or address certain driver behaviors – when it has acquired the skills to handle a set of new situations safely and has demonstrated its reliability – it’s time to convert those learnings into a software update and distribute it to the fleet.
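You can think of that decision as a release gate: a capability ships only once it clears both a data-volume bar and a demonstrated-reliability bar. A toy sketch with illustrative thresholds – not our real criteria:

```python
# A toy release gate: a scenario-handling capability ships only when the
# model has both seen enough data and demonstrated reliability. The
# thresholds below are illustrative, not real release criteria.

def ready_to_ship(scenario: str, samples: int, success_rate: float,
                  min_samples: int = 100_000, min_success: float = 0.9999) -> bool:
    ok = samples >= min_samples and success_rate >= min_success
    verdict = "ship in next update" if ok else "keep training"
    print(f"{scenario}: {samples:,} samples, {success_rate:.4%} -> {verdict}")
    return ok

ready_to_ship("unprotected left turn", samples=250_000, success_rate=0.99995)
ready_to_ship("roundabout in heavy rain", samples=40_000, success_rate=0.99910)
```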
Consequently, when a car receives a software update, it will be equipped to handle a broader range of scenarios and more complex situations than before. For instance, you may discover that a route you previously needed to drive yourself can now be taken over by your car.
There is a remarkable sense of unity in this system: cars equipped with our software can learn from each other’s experiences on the road. This means that a vehicle is essentially just an update away from reaping the benefits of all that accumulated traffic knowledge. At the same time, your own driving contributes to making other cars safer. So, as more people drive around, overall safety increases for everyone. How’s that for crowd-sourced safety?
The continuous development cycle explained above – this machine-learning loop of sensing, acting, learning, and repeating – is a powerful thing. Mastering it enables us to enhance a car’s autonomy progressively. As the vehicle optimizes its capacity to manage varied and increasingly complex traffic scenarios, it can drive autonomously in more locations. The environment deemed safe for autonomous driving is called the operational design domain. Expanding this domain is crucial for us. Once the entire world falls within it, we can finally relax.
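As a toy model of that loop, imagine each update cycle nudging a scenario’s demonstrated reliability upward until it clears an admission bar and joins the operational design domain. All scenarios, rates, and thresholds below are invented:

```python
# A toy model of the sense-act-learn loop expanding the operational design
# domain: each update cycle nudges a scenario's demonstrated reliability
# upward; once it clears the bar, the scenario joins the domain. All
# scenarios, rates, and thresholds are invented.

odd = {"highway, daylight"}   # where autonomy is allowed today
candidates = {"highway, night": 0.97, "urban, daylight": 0.92}

for cycle in range(1, 4):     # three update cycles
    for scenario, reliability in list(candidates.items()):
        reliability = min(1.0, reliability + 0.02)  # fleet data improves the model
        candidates[scenario] = reliability
        if reliability >= 0.999:  # illustrative admission bar
            odd.add(scenario)
            del candidates[scenario]
    print(f"after update {cycle}: {sorted(odd)}")
```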
In a nutshell, this is how we train cars to be perfect drivers. This is how we train our deep learning models on real-world data to create safety that doesn’t get stressed, tired, or lost in thought. This is how we will help usher in a new era of automation. In the meantime, our AD advancements are directly implemented in the driver support system we deliver today.
Much remains to be discovered and improved upon regarding the safety of autonomous vehicles. While AVs have the potential to drastically reduce the number of accidents caused by human error, they also present unique safety challenges that must be addressed before they can be deployed on a large scale. However, as the technology advances and more data is gathered, our researchers and engineers will better understand the safety risks and how to mitigate them.
But there’s so much more to talk about! There’s the (somewhat heated) debate about the different levels of autonomy; we can go deeper into how AD-proof technology makes ADAS better; we can discuss the role of deep learning in autonomous driving today and in the future (are we going full hive-mind?); there are hyper-important topics around AI, cybersecurity, and privacy; and what about the aftermath of an accident? Finally, where do we stand on future driver behavior (will drivers still take over the wheel – and can they)? And what about the democratization of safety? When will technology developments reach those who need them the most? There are several important topics to explore.
One last thing before you depart:
Our approach to safety is built on a profound sense of responsibility, grit, compassion, and trust in technological progress. This foundation supports every act of innovation and collaboration, with all our progress designed for one purpose: to save lives in traffic.
Stay tuned.