Feeling safe is vital for our physical, emotional, and psychological well-being, and it fundamentally affects our existence and experiences. But it isn’t always so straightforward.
Take the pandemic. Many of us tried to stay safe by staying home, but this also meant dealing with feelings of confinement, limitation, and anxiety. So, safety isn’t always about easy comfort. Still, there’s no denying that being safe is something most of us aim for, as it plays a massive part in our overall happiness and well-being.
Consequently, “safe” is a powerful word we like to throw around. Using it in a relative sense implies that something is comparatively freer from danger or harm than something else. For example, “My car is safer than yours because it has airbags.” Fair enough. However, when used in an absolute sense, we claim that something is entirely free from danger or harm – as in, “My car is safe because it has airbags.”
Well. However desirable, the notion of absolute safety is, of course, a fallacy. Risk and danger exist in every aspect of life. Yes, you can feel relatively safe from being attacked by a snake in an igloo, but there’s no possible situation where you are completely safe from everything. “Safe” simply doesn’t imply a definitive end state, unlike intrinsically absolute words like “empty,” “dead,” or “closed,” which represent an undisputed finality. An empty glass can’t become emptier, and a closed door represents the highest degree of closure possible. And you rarely come across hotel reviews raving about the dryness of the towels: Man, the Waldorf Astoria had the driest towels ever! (Though that might have more to do with the utter lack of relevance than the semantic properties of “dry.”)
In any case, the word “safe” behaves more like “big,” “hot,” “strong,” or “heavy.” It’s context-specific, subjective, and must be clearly defined to be used definitively. Getting someone to take your word for it requires a bit of convincing. And that can be a complicated exercise, not least from the perspective of self-driving cars.
How do you even define “safe” for something we collectively have little practical experience with? What assurances can be made for something that doesn’t yet fully exist?
Even if we, in theory, manage to devise suitable safety definitions for various levels of autonomy, will these align with the general understanding of safety? What safety standards are people generally prepared to accept, and how much will their understanding of and interest in new technology influence their capacity to embrace uncertainty? Put differently, will “safe enough” mean the same thing for (all) users and producers? Probably not. And since the proof is in the pudding – the technology can only truly be judged once it’s on the road – ensuring the absence of unreasonable risk remains, for now, a theoretical exercise.
To be sure, today’s popular view is that autonomous vehicles will change the world as we know it. They’re expected to offer enormous societal benefits, including time savings, increased personal safety, and mobility options for non-drivers. Some even predict that this technology will change our conception of mobility, reshaping car ownership and public transportation use more than anything before it.
But given its safety implications, the development of autonomous driving has proven to be a challenge. Yes, human driving kills almost 1.5 million people every year. In addition, 50+ million get injured, with suffering, sorrow, and costs piling up. And yes, automation is widely recognized for its potential to reduce human errors in traffic.
Still, safety assurance for AVs is a complicated matter, and solving this puzzle requires considerable technology development and divergent thinking about how vehicles are designed, deployed, and continuously updated. And that’s without even mentioning the jungle (made denser still by the increased use of AI) of legal, regulatory, ethical, and societal challenges. In short, it’s a mess.
Naturally, this is the rabbit hole we decided to go down.
With our team of roughly 500 developers, researchers, and engineers, we’re using deep learning and software engineering to design, develop, and deploy automated driving technology. Our software helps prevent accidents and incidents on the road, boosting safety for everyone involved; not only does it protect the car’s occupants, but it also safeguards fellow drivers, motorcyclists, cyclists, and pedestrians.
Of course, we also care about moose and other wildlife, albeit with a focus on minimizing the injuries they could cause in a crash. One day, when automation is so vigilant and precautionary that cars can sense and anticipate even the most minor disruption around them at a sufficient distance, even badgers might be free to socialize with their buddies across the highway whenever they please. But we’re not quite there yet.
At any rate, safety remains a significant hurdle in achieving mainstream adoption of autonomous driving. And since car safety software for autonomous driving is our contribution to a better society, we recognize our responsibility to define clearly what “safe” means to us. We must articulate our safety approach and explain why we’re equipped to help put reliably safe and efficient automation on the streets.
Creating such clarity would, however, necessitate a comprehensive safety report outlining the rules, regulations, and standards we follow – and the measures we implement for compliance. We would also have to quantify risk norms and differentiate between minor and severe injuries, defining acceptable and unacceptable risks.
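To make that last point slightly more concrete, here’s a deliberately simplified sketch, in Python, of what a quantified risk norm could look like once you’ve separated minor from severe injuries. Every name and number in it is hypothetical – a thought aid, not how any real safety case is built.

```python
# Hypothetical sketch only: none of these names, scales, or thresholds come
# from Zenseact or any standard; real risk norms (ISO 26262, ISO 21448, etc.)
# are far more nuanced than two numbers and a comparison.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    MINOR = "minor"    # e.g., injuries below some agreed severity cutoff
    SEVERE = "severe"  # e.g., life-altering or fatal injuries


@dataclass
class RiskNorm:
    """Maximum acceptable event rate (events per operating hour) per severity."""
    max_rate: dict[Severity, float]

    def is_acceptable(self, severity: Severity, estimated_rate: float) -> bool:
        """An estimated rate is acceptable if it stays within the norm."""
        return estimated_rate <= self.max_rate[severity]


# Illustrative numbers, not actual targets: severe outcomes get a far
# stricter budget than minor ones.
norm = RiskNorm(max_rate={Severity.MINOR: 1e-6, Severity.SEVERE: 1e-9})

print(norm.is_acceptable(Severity.SEVERE, estimated_rate=5e-10))  # True
print(norm.is_acceptable(Severity.MINOR, estimated_rate=2e-6))    # False
```

The comparison itself is trivial; the genuinely hard part is producing trustworthy estimates of those rates before the technology is in widespread use – which brings us right back to the theoretical exercise mentioned above.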
And we do all those things. Of course we do. And you’ll be able to read about them here – in the Deeper learning forum. Because this is where we discuss (at least some of) the complexities AD systems face and the aspects we at Zenseact must consider as we strive to develop safe automation.
So: please help yourself to more reads, podcasts, or videos.