Safety is fundamental to human existence, encompassing not just our physical well-being but our emotional and psychological states as well. When we talk about safety, we usually picture comfort, protection, and the absence of danger. But it’s not always so straightforward. Take the pandemic, for example. We all tried to stay safe by staying home, but this also meant dealing with feelings of confinement and limitation. So, safety isn’t always about easy comfort. Still, there’s no denying that being safe is something we all aim for, as it plays a huge part in our overall happiness and well-being.
Consequently, “safe” is a powerful word that we like to throw around. When we employ it in a relative sense, it casually implies that something is comparatively freer from danger or harm than something else. For example, “My car is safer than yours.” However, when used in an absolute sense, we claim that something is entirely free from danger or harm – as in, “My car is safe to drive.”
Well. Define safe.
However desirable, the notion of absolute safety is a fallacy. “Safe” does not imply a definitive end state, unlike intrinsically absolute words like “empty,” “dead,” or “closed,” which represent an undisputed finality. An empty glass can’t become “emptier,” and a closed door represents the highest degree of closure possible. And you rarely come across hotel reviews raving about the hygroscopic quality of the towels: Man, the Waldorf Astoria had the driest towels ever! (Though that might have more to do with the utter lack of relevance than with the semantic properties of “dry.”)
In any case, the word “safe” behaves more like “big,” “hot,” “strong,” or “heavy.” It’s context-specific, subjective, and must be clearly defined to be used definitively.
Indeed, risk and danger exist in every aspect of life. Yes, you can feel relatively safe from being attacked by a snake in an igloo, but there’s no possible situation where you are completely safe from everything. Therefore, we must balance our pursuit of safety with a recognition of the inevitability of uncertainty and a willingness to take calculated risks.
So, the questions are: to what extent will our general conception of safety affect the widespread adoption of self-driving cars? How much does our trust in technology play into this? How do we perceive the risks involved – and what kind of safety norms are we prepared to embrace?
Put differently, what will it take for us to comfortably pack our family into a car and let it drive itself onto the Autobahn on a dark and rainy night?
Today’s popular view seems to be that autonomous vehicles (AVs) will change the world as we know it. They’re expected to offer enormous societal benefits, including time savings, increased personal safety, and mobility options for non-drivers. Some even predict that this technology will change our conception of mobility, with the most significant impact on car ownership and public transportation use ever seen.
But with its safety implications, the development of autonomous driving has proven to be a challenge. Yes, human driving kills almost 1.5 million people every year. In addition, 50+ million get injured, with suffering, sorrow, and costs piling up. And yes, automation is widely recognized for its potential to reduce human error in traffic. Still, safety assurance for AVs is a complicated matter, and solving this puzzle requires considerable technology development and divergent thinking about how vehicles are designed, deployed, and continuously updated. And that’s without even mentioning the jungle of legal, regulatory, ethical, and societal challenges.
Naturally, this is the rabbit hole we decided to go down.
With a team of roughly 500 developers, researchers, and engineers, we’re harnessing the power of machine learning and software engineering to design, develop, and deploy automated driving technology. Our advanced software helps prevent accidents and incidents on the road, enhancing safety for everyone involved; not only does it protect the car’s occupants, but it also safeguards fellow drivers, motorcyclists, cyclists, and pedestrians.
One might say our concern also extends to moose and other wildlife, albeit with a focus on minimizing the injuries they could cause in an accident. One day, perhaps, when automation is so vigilant, so precautionary, so human-like that cars can sense and anticipate even the smallest disruption around them at a sufficient distance, even badgers might be free to socialize with their buddies across the highway whenever they please. But we’re not quite there yet.
At any rate, safety remains a significant hurdle in achieving mainstream adoption of autonomous driving. And as car safety is our signature dish, we recognize our responsibility to provide a clear definition of it. We must also articulate our safety approach and explain why we’re exceptionally equipped to help put safe, reliable, and efficient automation on the streets.
Creating such clarity would, however, necessitate a comprehensive safety report outlining the rules, regulations, and standards we follow – and the measures we implement for compliance. We would also have to quantify risk norms and differentiate between minor and severe injuries, defining acceptable and unacceptable risks.
And we do all those things. Of course we do. But we’re not getting into that here. Over a series of posts, we’ll merely scratch the surface of what automated driving (AD) systems face and the aspects we at Zenseact must consider as we strive to develop our technology. Like the broader AI discussion, autonomous driving most definitely sparks its fair share of philosophical debate.
Our aim here is to provide some insights into this discussion.