
Down the rabbit hole

Safety is vital for our physical, emotional, and psychological well-being, and it fundamentally affects our existence and experiences. Guaranteeing it, however, isn’t always so straightforward.

There’s no denying that being safe is something most of us aim for, as it plays a massive part in our overall happiness and well-being. Consequently, “safe” is a powerful word we like to throw around. Casually using it in a relative sense implies that something is comparatively freer from danger or harm than something else. For example, “My car is safer than yours because it has airbags.” Fair enough. However, when used in an absolute sense, we claim that something is entirely free from danger or harm – as in, “My car is safe because it has airbags.”

Well… However desirable, the notion of absolute safety is, of course, a fallacy. Risk and danger exist in every aspect of life. Yes, you can feel pretty safe from being attacked by a snake in an igloo, but there’s no possible situation where you are completely safe from everything. “Safe” simply doesn’t imply a definitive end state, unlike intrinsically absolute words like “empty,” “dead,” or “closed,” which represent an undisputed, and often sad, finality. An empty glass can’t become emptier, and a closed door represents the highest degree of closure possible. And you rarely come across hotel reviews raving about the hygroscopic quality of their towels: Man, the Waldorf Astoria had the driest towels ever! (Though that might have more to do with the utter lack of relevance than the semantic properties of “dry.”)

In any case, the word “safe” behaves more like “big,” “hot,” “strong,” or “heavy.” It’s context-specific, subjective, and must be clearly defined to be used definitively: what do you mean by safe? Getting someone to take your word for it often requires a bit of convincing – just how much depends on an infinite number of variables. And that can be a complicated exercise, not least from the perspective of self-driving cars.

How do you define “safe” for something we collectively have little practical experience with? What assurances can be made for something that doesn’t yet fully exist?

Even if we, in theory, manage to devise adequate safety definitions for various levels of autonomy, will these align with the general understanding of safety? What safety standards are people generally prepared to accept, and how much will their understanding of and interest in new technology influence their capacity to embrace uncertainty? Do they even care? Put differently, will “safe enough” mean the same thing for (all) users and producers? Probably not. The proof is in the pudding – we need cars out there to show what applied road safety looks like. Until that happens, ensuring the absence of unreasonable risk is a theoretical exercise.

To be sure, today’s popular view seems to be that autonomous vehicles will fundamentally change personal and commercial transportation. They’re expected to offer enormous societal benefits, including time savings, increased personal safety, and mobility options for non-drivers. Some even predict that this technology will change our conception of mobility, with the most significant impact ever seen on car ownership and transportation use.

But with its safety implications, the development of autonomous driving has proven to be a challenge. Yes, human driving kills almost 1.5 million people every year. In addition, 50+ million get injured, with suffering, sorrow, and costs piling up. And yes, automation is widely recognized for its potential to reduce human errors in traffic; the principle seems to be that the more automated, the safer. Still, recent robotaxi debacles hardly help public perception about the safety of self-driving cars. And there are countless examples of vehicles (involving big brands and small ones) not capable of handling the chaos of traffic by themselves. Put differently: AD safety shouldn’t be a race.

Indeed, safety assurance for AVs is a complicated matter, and solving this puzzle requires considerable technology development and divergent thinking about how vehicles are designed, deployed, and continuously updated. And that’s before we even mention the jungle of legal, regulatory, ethical, and societal challenges – a jungle made denser still by the increased use of AI. Quite frankly, it’s a mess.

Naturally, this is the rabbit hole we decided to go down.

With our team of roughly 500 developers, researchers, and engineers, we’re using deep learning and software engineering to design, develop, and deploy automated driving technology. Our AI-powered software helps prevent accidents and incidents on the road, boosting safety for everyone involved; not only does it protect the car’s occupants, but it also safeguards fellow drivers, motorcyclists, cyclists, and pedestrians.

Of course, we also care about moose and other wildlife, albeit with a focus on minimizing the injuries they could cause in a crash. One day, when automation is so vigilant and precautionary that cars can sense and anticipate even the most minor disruption around them at a sufficient distance, even badgers might be free to socialize with their buddies across the highway whenever they please. But we’re not quite there yet.

At any rate, safety remains a significant hurdle in achieving mainstream adoption of autonomous driving. As car safety software for autonomous driving is our contribution to better mobility and a more sustainable society, we recognize our responsibility to provide a clear definition of it. We must articulate our safety approach and explain why we’re equipped to help put reliably safe and efficient automation on the streets.

Creating such clarity would, however, necessitate comprehensive safety reports outlining the rules, regulations, and standards we follow – and the measures we implement for compliance. We would also have to quantify risk norms and differentiate between minor and severe injuries, defining acceptable and unacceptable risks.

And we do all those things. Of course we do. And you’ll be able to read about them here – in the Deeper learning forum. This is where we discuss (at least some of) the complexities AD systems face and the aspects we at Zenseact must consider as we strive to develop safe automation.

Stay tuned.

10 April 2023
Credits: Christian Sjögreen, Jonas Ekmark and Fredrik Sandblom

