
Molinaroli College of Engineering and Computing

[Photo: Xiaofeng Wang holding a drone overlooking the Swearingen courtyard]

Enabling safe learning in autonomous systems

On March 18, 2018, a pedestrian was killed in Tempe, Arizona, by an Uber self-driving car. Although the car detected the pedestrian crossing the street five seconds before impact, it did not identify her as a human being or predict her path. The tragic incident raised concerns about how to safely test new autonomous technologies.

Since this past January, Electrical Engineering Associate Professor Xiaofeng Wang has been working to advance end-to-end safety and performance for learning-enabled autonomous systems in real-world settings. Wang is collaborating with researchers from the University of Illinois on a nearly $1.5 million, four-year research project funded by the National Science Foundation.

While learning-based autonomous systems have proven effective in perception and control, safety assurance remains a great challenge, especially when operating in unpredictable environments. And while some safety guarantees exist, they tend to be overly conservative, restricting capability and performance. Wang’s team is working to develop an innovative learning-based system that combines efficiency with guaranteed safety. He aims to create a novel framework for high-performance learning-enabled controllers that are reinforced with safety-tube guarantees.

Wang says that explainable artificial intelligence can provide a set of techniques and processes to help understand how these models make decisions. But it is only a starting point.

“AI is widely used in different areas, but the question is, ‘If you have a high level of safety requirements, how can you guarantee that AI will not create something unpredictable, and what will you do if that happens?’ The project is based on this idea, and we are using an autonomous vehicle as a testbed to prevent issues with a specific design,” Wang says.

The research builds upon Wang’s previous NSF-sponsored project, also a collaboration with the University of Illinois, which focused on engineering safety-critical cyber-physical systems. Wang developed a robust simplex architecture with a novel safety monitor, a high-performance controller, a high-assurance controller, and decision logic that triggered the switch between the two controllers.
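To make the simplex pattern concrete, here is a minimal sketch in Python. The lane-keeping state, controller gains, and safety threshold are illustrative assumptions, not details of Wang’s actual implementation; the point is only the structure, in which the decision logic uses the safety monitor to certify the learning-enabled controller’s action and falls back to the verified baseline otherwise.

```python
# A minimal sketch of the simplex pattern, with illustrative dynamics,
# gains, and thresholds (not from Wang's project).
from dataclasses import dataclass

@dataclass
class State:
    position: float   # lateral offset from lane center (m)
    velocity: float   # lateral velocity (m/s)

def high_performance_control(s: State) -> float:
    """Stand-in for the learning-enabled controller (e.g., a neural policy)."""
    return -1.8 * s.position - 0.9 * s.velocity  # aggressive gains

def high_assurance_control(s: State) -> float:
    """Stand-in for the verified baseline controller."""
    return -0.5 * s.position - 1.2 * s.velocity  # conservative, well-damped

def safety_monitor(s: State, max_offset: float = 1.0) -> bool:
    """Check the safety condition on a crude one-step-ahead prediction."""
    lookahead = s.position + 0.5 * s.velocity
    return abs(lookahead) < max_offset

def simplex_step(s: State) -> float:
    # Decision logic: use the high-performance controller only while the
    # monitor certifies the system stays in the safe region; otherwise switch.
    if safety_monitor(s):
        return high_performance_control(s)
    return high_assurance_control(s)

print(simplex_step(State(position=0.2, velocity=0.1)))  # performance mode
print(simplex_step(State(position=0.9, velocity=0.8)))  # falls back to safety
```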

With his current project, Wang will implement a learning-enabled simplex framework to contain errors that arise from learning. The goal of the learning-enabled simplex is to empower autonomous systems to function amid unpredictable changes and environmental hazards. The model will also actively acquire new data and insights from its operating environment. Since the project began in January, Wang’s team has developed a basic framework, on which they will run simulations and test how much safety it can guarantee.

“While experiments showed that the robust simplex could effectively handle certain software and physical failures, it only focused on the control system and low-level functions,” Wang says. “The learning-enabled simplex focuses on the entire autonomy pipeline. We currently have a small part of the framework, including learning the parameters of the model predictive controller. This will allow the system to achieve high autonomy performance.”
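Learning a model predictive controller’s parameters can be pictured with a toy example. The sketch below assumes a one-dimensional double-integrator plant and a small grid of candidate control-effort weights, all illustrative choices rather than the project’s actual pipeline; each candidate weight is scored on a simulated closed-loop rollout and the best one is kept.

```python
# A toy sketch of tuning an MPC cost weight from closed-loop rollouts;
# the plant, weights, and scoring metric are illustrative assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator, dt = 0.1 s
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])                  # fixed state-error weight

def mpc_gain(r: float, horizon: int = 20) -> np.ndarray:
    """First-step feedback gain of a finite-horizon linear MPC problem."""
    P = Q.copy()
    for _ in range(horizon):             # backward Riccati recursion
        K = np.linalg.solve(r + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def episode_score(r: float, steps: int = 100, u_max: float = 2.0) -> float:
    """Closed-loop metric: state error plus a penalty for saturating the actuator."""
    K, x, score = mpc_gain(r), np.array([[1.0], [0.0]]), 0.0
    for _ in range(steps):
        u = (-K @ x).item()
        score += (x.T @ Q @ x).item() + 100.0 * max(0.0, abs(u) - u_max)
        x = A @ x + B @ np.array([[u]])
    return score

# "Learn" the control-effort weight by scoring candidates on rollouts.
candidates = [0.01, 0.1, 1.0, 10.0]
best = min(candidates, key=episode_score)
print(f"selected control-effort weight r = {best}")
```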

According to Wang, there is no absolute guarantee that AI software used to control a car will always be safe. He aims to design AI systems cautiously so that their behavior can be understood, and he will also include a monitoring and switching mechanism that falls back to a safety backup whenever the AI is not working correctly. The testing will be performed on a full-size car at the University of Illinois.

“We’re trying to balance the traditional and AI-based intelligent designs. The traditional design is safe but not fancy or smart. So, we’re trying to design the switching mechanism to determine the moment to switch and the safe actions to take,” Wang says.

While Wang admits his idea and objective are straightforward, it is hard to predict when AI is not safe. To help alleviate the issue, his team is developing what they call a safety envelope. If a behavior is near the envelope’s boundary and shows a tendency to cross it, the system is still in the safe zone, but only marginally, which is the signal to intervene before safety is actually violated.
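The envelope idea can be sketched as a simple classifier over a behavior’s distance to the boundary and its trend. The boundary, margin, and projection horizon below are illustrative numbers, not values from the project; a real envelope would be derived from the system model.

```python
# A minimal sketch of the "safety envelope" idea with illustrative
# boundary, margin, and horizon values.
from enum import Enum

class Zone(Enum):
    SAFE = "safe"
    MARGINAL = "marginal"   # inside the envelope, but trending toward its edge
    UNSAFE = "unsafe"

def classify(offset: float, rate: float,
             boundary: float = 1.0, margin: float = 0.2,
             horizon: float = 0.5) -> Zone:
    """Classify a behavior by its distance to the boundary and its trend.

    offset:  current distance from the nominal behavior
    rate:    how fast the offset is growing (the "tendency")
    horizon: how far ahead (seconds) to project the trend
    """
    if abs(offset) >= boundary:
        return Zone.UNSAFE
    projected = offset + horizon * rate          # short-horizon projection
    near_edge = abs(offset) >= boundary - margin
    heading_out = abs(projected) >= boundary
    if near_edge or heading_out:
        return Zone.MARGINAL                     # still safe, but act early
    return Zone.SAFE

print(classify(offset=0.3, rate=0.0))   # Zone.SAFE
print(classify(offset=0.9, rate=0.1))   # Zone.MARGINAL
print(classify(offset=1.1, rate=0.0))   # Zone.UNSAFE
```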

As the project continues, Wang’s team hopes to provide a comprehensive guide on how to design AI for high performance and how to design monitoring systems and hedging actions.

“In order to design the monitoring system, you need to know how it works to understand the correct behavior and, most importantly, the safety parts, controller and actions. The long-term objective is to figure out how we can design for safety,” Wang says.

And while Wang’s research project focuses on improving safety for self-driving vehicles, the applications extend to all safety-critical systems. “It covers almost all safety-critical systems, including power, transportation and medical systems, healthcare and space exploration,” Wang says.

As the project moves forward, Wang is excited to help develop one of the many safety-critical systems into which AI is being integrated. He hopes that these systems can be combined with advanced AI without people worrying about safety.

“Some groups hesitate to use AI, even if there is only a one percent chance of a safety risk, and that prevents it from being applied in important facilities,” Wang says. “This project might be able to help them overcome this barrier.”


