Sense. Act. Learn. Repeat.
How do we get cars to drive better than us?
The mainstream use of self-driving cars is years away. That doesn’t mean the technology isn’t here. The trick is deploying the software and continuously improving it based on how it performs in traffic. Find out what we’re doing to make cars safer.
First of all, a car needs eyes. It must be able to see. With sensors and cameras, a vehicle can detect all kinds of stationary and moving objects, close by or even hundreds of meters away. This means it can spot a distant deer on the road or a ball-chasing kid jumping out from behind a parked car – in darkness just as well as in daylight. It’s amazing what a vehicle can detect with the proper hardware.
Still, none of that would actually be possible without a brain telling the eyes what they’re looking at – and what to do with the information.
Thinking fast and getting it right
So, how fast can a computer make sense of the visual world? Well, this magical feat of computation is performed in milliseconds, thanks to deep learning methods: algorithms executed in a deep neural network.
We use such networks to extract the correct information – for instance, the exact location of surrounding vehicles or pedestrians – from the imagery provided by the sensors and cameras.
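To make that concrete, here is a minimal sketch of a single perception step in Python. It assumes a hypothetical pre-trained `detector` callable; the names and fields are illustrative, not our actual interfaces:

```python
# A minimal sketch of one perception step, assuming a hypothetical
# pre-trained `detector` that maps a camera frame to detections.
import numpy as np

def detect_objects(frame: np.ndarray, detector) -> list[dict]:
    """Run the neural network on one camera frame and return the
    objects it found: a class label, a confidence score, and an
    estimated position relative to the car."""
    detections = detector(frame)  # one forward pass, milliseconds on a GPU
    return [
        {
            "label": d.label,          # e.g. "pedestrian", "vehicle"
            "confidence": d.score,     # 0.0 to 1.0
            "position_m": d.position,  # (x, y) in meters, ego frame
        }
        for d in detections
        if d.score > 0.5  # keep only confident detections
    ]
```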
This machine-learning process is similar to how the human brain deals with the visual world. Neural networks are, after all, simulated brain processes mimicking the way biological neurons signal each other. And while a human on red alert can also avoid imminent danger on the road, the AI is always on, always alert. That’s the crucial difference.
Learning by doing
Now, once the car figures out what it’s dealing with, it must decide how to proceed. Should it brake hard, slow down or accelerate? Should it veer to the side or get back on track?
And here’s the cool part. Whatever happens – an accident, an incident, or a near-incident – the software makes a note of it. The experience is then stored in an ever-growing library of learnings from other vehicles using the same technology.
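As a rough illustration of what one such note could look like (the schema below is invented for this example, not our actual format):

```python
from dataclasses import dataclass, field
import json
import time

@dataclass
class TrafficEvent:
    """One logged incident or near-incident."""
    kind: str                      # "accident", "incident", "near-incident"
    location: tuple[float, float]  # latitude, longitude
    timestamp: float = field(default_factory=time.time)
    sensor_snapshot: dict = field(default_factory=dict)

def log_event(event: TrafficEvent, library: str = "events.jsonl") -> None:
    """Append the event to the ever-growing shared library
    (represented here by a local JSON-lines file)."""
    with open(library, "a") as f:
        f.write(json.dumps(event.__dict__) + "\n")

log_event(TrafficEvent("near-incident", (57.71, 11.97)))
```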
Towards zero. Together.
The ever-refined traffic data from the fleet then forms the basis for future safety improvements. So when your car gets another software update down the line, it will be able to handle a wider range of scenarios and more complex situations.
The best part? Your car can learn from other cars’ experiences on the road – and vice versa. This means that your vehicle is essentially just an update away from reaping the benefits of all that accumulated traffic knowledge.
Put differently, everyone driving a car using our software contributes to making roads safer. How’s that for crowd-sourced safety?
The Loop
It’s really something else, this continuous development cycle. We call it the Loop. Mastering it lets us make cars gradually more autonomous; as the car learns to handle more varied and more complex traffic scenarios, it will be able to drive by itself in more places.
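Reduced to a skeleton, the cycle could be sketched like this in Python. Every function here is a toy stand-in for a system that is enormous in practice:

```python
# Toy stand-ins for systems that are enormous in practice.
def collect_fleet_data(model):
    return [{"hard": True, "scene": "cut-in"}]  # sense & act in traffic

def mine_difficult_scenarios(data):
    return [d for d in data if d["hard"]]       # find what the model struggles with

def retrain(model, cases):
    return model + len(cases)                   # "learning" bumps the model version

def passes_safety_validation(model):
    return True                                 # assume validation succeeds here

def the_loop(model=1, cycles=3):
    """The continuous development cycle, reduced to its skeleton:
    collect, mine, retrain, validate, deploy - then repeat."""
    for _ in range(cycles):
        data = collect_fleet_data(model)
        hard_cases = mine_difficult_scenarios(data)
        model = retrain(model, hard_cases)
        if passes_safety_validation(model):
            print(f"deploying model v{model} over the air")
    return model

the_loop()
```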
The set of conditions under which autonomous driving is verified to be safe is called the operational design domain. For us, growing this domain is critical. One day, when the whole world is our domain, we can relax.
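Before the car engages autonomy, it checks whether its current conditions fall inside that domain. A toy version of such a check, with limits invented purely for illustration:

```python
def within_odd(road_type: str, weather: str, speed_kmh: float) -> bool:
    """Toy operational design domain check: autonomy may engage only
    when every condition is inside the validated envelope. The limits
    here are invented for illustration."""
    return (
        road_type in {"highway", "expressway"}
        and weather in {"clear", "light_rain"}
        and speed_kmh <= 130
    )

# Highway driving in clear weather at 110 km/h is inside this toy
# domain; a snow-covered rural road is not.
assert within_odd("highway", "clear", 110)
assert not within_odd("rural", "snow", 70)
```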
In a nutshell, this is how we train cars to be perfect drivers. This is how we create safety that doesn’t get stressed, tired or lost in thought. This is how we will create the uncrashable car.
Creating the uncrashable car
Vehicle autonomy is expected to offer massive societal benefits, including time savings, increased personal safety, and new mobility options. But with its implications for safety, comfort, and efficiency, autonomous driving (AD) has proven challenging. Discover our solution.
How can you satisfy real-world safety requirements and not just adhere to laws and traffic rules? This paper proposes that self-driving cars can adjust their driving to external conditions and to knowledge of common human mistakes. Find out how.
It all ADs up
When training software to work without human supervision, you can’t take anything for granted. There’s just no safety net. No person with one eye on the road. No plan B.
We don’t take shortcuts when it comes to safety. That’s why we have chosen to boost our advanced driver assistance systems with autonomous driving technology.
With less advanced solutions, your car won’t detect that moose, this grandfather, or that child on a bike – not in the dark. It won’t see them in time, and it won’t brake in time. With an AD- and lidar-based approach to safety, however, it will. Your car will detect virtually anything in its path.
And from a great distance, too – which is really the point of this technology. It isn’t just the reaction time. It’s the fact that it’s always there, always on, always vigilant, spotting dangers so far ahead (several hundred meters, in fact) that you’ll likely be able to steer clear of the situation altogether.
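A back-of-the-envelope calculation shows why sensing range matters so much. The figures below are standard textbook assumptions, not measured values:

```python
# Rough stopping-distance estimate at highway speed.
speed_kmh = 110
v = speed_kmh / 3.6          # ~30.6 m/s
reaction_time_s = 1.5        # typical human reaction time
deceleration = 7.0           # m/s^2, hard braking on dry asphalt

reaction_distance = v * reaction_time_s       # ~46 m
braking_distance = v**2 / (2 * deceleration)  # ~67 m
total = reaction_distance + braking_distance  # ~113 m

print(f"Stopping distance at {speed_kmh} km/h: ~{total:.0f} m")
# A hazard detected 250+ m ahead leaves room to slow down gently or
# change lanes long before hard braking is the only option left.
```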
By handing over the difficult driving to an AI, we’re essentially creating a better version of you: a robot with super-human powers watching over you every step of the way. This requires an AD-first mindset – a truly predictive mindset.
There are robotaxis driving around San Francisco all by their lonesome. And that’s awesome. But imagine what it would take to have a car drive itself across the US, Europe, or Asia. If we’re serious about teaching cars to drive anywhere, they have to learn everywhere.
When enough cars are equipped with the right technology, this mapping of the world will eventually take care of itself. Meanwhile, we have to do it ourselves, using test cars. Our vehicles are constantly collecting sensor data in Europe, Asia, and North America to develop and verify future autonomous drive functionality and validate sensor performance.
The collected data comprises lidar, radar, and camera recordings of lane markings and road objects – such as vehicles, bicycles, and heavy trucks – along with geographical position, ego vehicle data, and driver monitoring information. See our schedule below.
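To illustrate how one such time-synchronized sample might be organized (the schema is hypothetical; the field types are simplified stand-ins for real sensor payloads):

```python
from dataclasses import dataclass

@dataclass
class CollectedFrame:
    """One time-synchronized sample from a data collection vehicle."""
    timestamp: float
    gps: tuple[float, float]  # geographical position (lat, lon)
    lidar_points: bytes       # raw lidar point cloud
    radar_tracks: list[dict]  # objects detected by radar
    camera_image: bytes       # compressed camera frame
    ego: dict                 # speed, steering angle, and other vehicle data
    driver_monitoring: dict   # attention and gaze information
```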
Zenseact on the road
From 2023-09-18 to 2023-11-03, we will collect data in the following locations:

Start         End           Location
2023-09-18    2023-10-06    France
2023-09-18    2023-10-06    Germany
2023-09-20    2023-12-31    Kuala Lumpur
2023-09-25    2023-10-27    Phoenix
2023-09-25    2023-10-06    Copenhagen
2023-10-09    2023-11-03    Bilbao
Professional drivers will drive the data collection vehicles, which will be equipped with a roof-mounted platform containing reference sensors such as cameras and lidar. You can read more about the information Zenseact collects in our privacy policy.