Seeking Misbehaving Virtual Drivers

Simulating accidents in mixed reality is a critical step for the future of autonomous vehicles.

Simulation of how sensors perceive the foreground under different weather scenarios in Dassault Systèmes’ software. Image courtesy of Dassault Systèmes.


On average, one fatality occurs for roughly every 100 million vehicle miles traveled, according to statistics from the “Traffic Safety Facts Annual Report, June 2022” by the National Highway Traffic Safety Administration (NHTSA; cdan.nhtsa.gov). That rate held steady over the decade from 2010 to 2020. Further, the average time between collisions for an individual driver is about 10 to 14 years, according to insurance giant Allstate’s “Allstate America’s Best Drivers Report 2019.”

If you’re a self-driving car developer trying to test and prove the safety of your vehicle, these statistics are not encouraging. They give you a hint of the sheer volume of simulated driving hours necessary to uncover the probable fatal scenarios your vehicle might face, then train it to deal with them effectively.
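To put those numbers in perspective, here is a back-of-the-envelope estimate (ours, not from the article): treating fatal crashes as a Poisson process at the NHTSA rate of one per 100 million miles, the mileage required just to have a good chance of encountering one such event can be computed directly.

```python
import math

# Assumed rate from the NHTSA figure cited above:
# roughly one fatality per 100 million vehicle miles traveled.
RATE_PER_MILE = 1 / 100_000_000

def miles_to_observe(confidence: float, rate: float = RATE_PER_MILE) -> float:
    """Miles of driving needed for the probability of seeing at least
    one event to reach `confidence`, assuming a Poisson process:
    P(at least one event in m miles) = 1 - exp(-rate * m)."""
    return -math.log(1 - confidence) / rate

# Miles needed for a 95% chance of merely *encountering* one
# fatal-severity scenario:
print(f"{miles_to_observe(0.95):.3g} miles")  # → 3e+08 miles
```

That is roughly 300 million miles before a fleet could expect to witness even one such event, let alone enough of them to train and statistically validate against, which is why developers turn to simulation.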

“Safety critical situations, like accidents, are a low probability in real life, so if you test your autonomous vehicle in the real world, the car doesn’t have enough chances to experience these situations,” says Henry Liu, University of Michigan’s professor of civil engineering and director of Mcity, a public-private transportation and mobility research partnership. Liu also oversees the Center for Connected and Automated Transportation, a regional transportation research center funded by the U.S. Department of Transportation.

Misbehaving Virtual Drivers Wanted

To prove that a self-driving car can handle the safety-critical scenarios it might face, you must subject the car to a wide range of collisions, near misses, close calls and corner cases. Liu and his team use a mixed reality setup in Mcity to accomplish this task. Mcity is a 32-acre mock city and driving ground located on the University of Michigan’s North Campus in Ann Arbor, MI. It’s also connected to a 3D digital twin of the environment, which allows the engineers to conjure up any type of use case. Their validation strategy is to create terrible virtual drivers—the kind you would not want to encounter in the real world—to test the autonomous car’s reflexes.

“We use the collected data on corner cases, not to train the automated vehicle but to train the surrounding virtual vehicles so they know when they should make their maneuvers to challenge the automated vehicle,” Liu says.

Mcity’s architecture is based on Simulation of Urban Mobility (SUMO), an open-source traffic flow simulator. Using the SUMO kernel, Liu and his team replace the default behavior model (which controls the movements of the virtual drivers and pedestrians) with their own machine-learning model, the Neural Naturalistic Driving Environment (NeuralNDE).
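The NeuralNDE model itself is not published in this article, but the integration pattern is straightforward to sketch. SUMO exposes a Python control API called TraCI; the snippet below is a hedged illustration, with `adversarial_speed` as a hypothetical stand-in for the learned behavior model and `scenario.sumocfg` as an assumed configuration file, showing how an external model can override SUMO's default car-following behavior at every simulation step.

```python
import random

def adversarial_speed(ego_gap_m: float, current_speed: float) -> float:
    """Hypothetical stand-in for a learned behavior model such as
    NeuralNDE: occasionally brake hard when close to the vehicle ahead,
    manufacturing the cut-in/hard-brake challenges described above."""
    if ego_gap_m < 15.0 and random.random() < 0.3:
        return max(0.0, current_speed - 8.0)  # hard braking challenge
    return current_speed

def run(steps: int = 1000) -> None:
    # Requires a local SUMO install; the `traci` package ships with SUMO.
    import traci
    traci.start(["sumo", "-c", "scenario.sumocfg"])  # assumed config file
    for _ in range(steps):
        traci.simulationStep()
        for veh in traci.vehicle.getIDList():
            # getLeader returns (leaderID, gap) or None if no leader.
            leader = traci.vehicle.getLeader(veh) or (None, 1e9)
            speed = traci.vehicle.getSpeed(veh)
            # Replace SUMO's default car-following output with ours.
            traci.vehicle.setSpeed(veh, adversarial_speed(leader[1], speed))
    traci.close()
```

The real system trains the surrounding agents on collected corner-case data rather than hand-coding a braking rule, but the control loop, stepping the simulator and overriding each background vehicle's behavior, follows this shape.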

Creating Virtual Collisions to Study Them

Siemens Digital Industries’ offerings for autonomous car developers include Simcenter Prescan, a physics-based sensor simulation system for validating and verifying advanced driver assistance systems (ADAS), and Simcenter SCAPTOR, a hardware-software combo for collecting and analyzing raw driving data.

“Ultimately you want to import the [captured driving data] into the virtual world to make sure that any variation of the edge scenarios you encountered can be tested in simulation,” says Robbert Lohmann, business development director of autonomous vehicle solutions, Siemens Digital Industries Software.

Lohmann revealed that Siemens is currently developing a critical-scenario creation method that uses the Simcenter products.

“The methodology that we’ve developed is specifically to identify unknown unsafe scenarios,” Lohmann says. “Whereas common practice is to take recorded data of a known safe situation and change the behaviors to create a known unsafe situation [hoping that this was previously an unknown unsafe situation], a systematic approach that also takes into account actors not present and potential routes not recorded is required to really minimize the number of unknown unsafe scenarios.”

The sensor-based approach focuses on what Lohmann calls gates—“all the entries and exits in the scene,” Lohmann says. “Basically, you map out potential trajectories between the exit and entry gates, involving cars and pedestrians traveling at certain speeds. When you put all those factors into the simulation program, you can then use optimization software to find out, based on the criticality and novelty, which scenarios are unknown and unsafe.”
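Lohmann does not detail the implementation, but the combinatorial core of the gate idea can be illustrated with a toy example (all routes, speeds and the criticality proxy below are hypothetical): enumerate every actor's route between gates at several candidate speeds, then flag combinations whose arrivals at a shared conflict point nearly coincide.

```python
from itertools import combinations

# Hypothetical scene: each actor enters through a gate and travels a
# straight route to one shared conflict point (distances in meters).
routes = {"car_north": 80.0, "car_west": 60.0, "pedestrian_east": 25.0}
speeds = {"car_north": [8, 14], "car_west": [6, 11],
          "pedestrian_east": [1.5]}  # candidate speeds, m/s

def arrival_times():
    """Time for every (actor, speed) combination to reach the conflict point."""
    for actor, dist in routes.items():
        for v in speeds[actor]:
            yield actor, v, dist / v

def critical_pairs(threshold_s: float = 1.0):
    """Pairs of different actors whose arrivals nearly coincide:
    candidate unsafe scenarios worth simulating in full."""
    for (a1, v1, t1), (a2, v2, t2) in combinations(list(arrival_times()), 2):
        if a1 != a2 and abs(t1 - t2) < threshold_s:
            yield (a1, v1), (a2, v2), round(abs(t1 - t2), 2)

for pair in critical_pairs():
    print(pair)
```

Production tooling would score criticality with proper trajectory geometry, dynamics and novelty metrics, and would hand the search to an optimizer rather than brute-force enumeration; the sketch only shows why framing the scene as gates and routes makes the space of scenarios enumerable at all.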

Part of the technology is now patented. “It’s a proof of concept, and it holds great promises,” Lohmann says. The package is in post-beta phase, under testing with select customers.

Another engineering software maker, Dassault Systèmes, offers CATIA SCANeR on the 3DEXPERIENCE platform.

“We use data analysis of existing records on roads and scenario-based simulations to elicit vehicle and embedded system requirements, taking into account dangerous road user behavior early in the process,” say Euriel Malpiece, ADAS senior sales expert, Dassault Systèmes, and Victor-Marie Lebrun, CATIA Systems engineering senior manager, Dassault Systèmes, in a joint statement.

Once the requirements are established, these hazardous scenarios, along with AI-oriented drivers, are used again in various use cases—from embedded code design to the simulation supporting a continuous integration/continuous delivery process, according to the Dassault Systèmes experts. Hardware-in-the-loop and vehicle-in-the-loop test platforms are then used for tuning and validation, while driver acceptance is tested on a driving simulator.

Coding Bad Driving Behavior

AI hardware and software developer NVIDIA offers DRIVE Sim, powered by its immersive 3D platform Omniverse. Marco Pavone, director of autonomous vehicle research at NVIDIA, knows all too well the challenges of replicating complex human driving behavior in software.

“A driver who is aggressive can be so in varying degrees, from cutting a car off once then driving safely, to consistently tailgating and cutting in,” Pavone explains. “Additionally, driving norms vary by region. In Pittsburgh, it’s customary for the first car in the oncoming left-turn lane to turn before opposing traffic crosses the intersection—a maneuver known as the ‘Pittsburgh Left’—even though the oncoming cars have the right of way.

“Traditional traffic models are based on a game-like logic, where the range of possible behaviors is limited,” says Pavone, which poses a challenge in duplicating the complexity of the real world.

For NVIDIA, use of AI is key. “We use techniques from generative AI and, in particular, diffusion models, to build models that accurately reflect how humans drive. We then use these models to generate new and diverse interactive scenarios,” says Pavone.

But that method has its own challenges. AI or machine learning demands real-world data—lots of it. NVIDIA’s strategy is to focus initially on lower-level autonomy—the ADAS and highway autopilot systems already in operation.

“We can leverage infrastructure and datasets already in use and readily available to advance these generative models for higher levels of autonomy,” says Pavone.

This is also an issue with other simulation-driven approaches. You can create any scenario you desire in virtual reality, but if it’s not based on real-world events, its usefulness as training data is questionable.

Siemens offers Simcenter hardware and software to capture sensor data and recreate various scenarios, but the company is not in the business of capturing data and providing scenarios. So the data must come from the users—the autonomous car developers with sufficient resources to put test vehicles on the road.

The other issue is the sheer volume of computing power needed to run AI-based training and analysis. NVIDIA turns to high-performance computing (HPC) systems powered by its own RTX graphics processing units, specifically designed to process AI workloads. Mcity’s Liu is also fortunate enough to have access to the University of Michigan’s HPC resources.

Dassault Systèmes’ Malpiece and Lebrun believe a robust scenario editor and traffic flow modelers are the key components to testing and validation.

“Having an existing database of scenarios that can be simulated with the semantics to describe these is crucial in order to refine requirements and operational design domain,” they say.

Coding the Sixth Sense

Describing his challenge, Mcity’s Liu says, “There’s no mathematical formulation [for bad driving behavior], and if you don’t have concrete formulation, it’s hard for the engineers to solve it.”

Engineers use Mcity, a 32-acre driving ground, along with augmented reality, to verify how the autonomous vehicle reacts to safety critical scenarios. Image courtesy of Mcity, University of Michigan.

To make it easier to generate the so-called bad drivers, Pavone says, “NVIDIA Research is exploring how to harness the expressivity of large language models to generate complex interaction scenarios from user-specified text, such as those one might find on insurance claims or a police report.”

If coding bad virtual drivers is a challenge, coding good defensive driving is even more challenging.

“Human drivers have what you might call a sixth sense, a sense of anticipation. They have the creativity to resolve potentially dangerous situations beforehand. Think of the number of accidents people were able to prevent because of that sixth sense,” says Lohmann. “It’s a huge challenge to do that with software.”
