
College of Engineering and Computing


Developing self-learning ground robotics for monitoring, controlling and troubleshooting combat mechanical systems

A plug-and-play (PnP) tool in a military system frequently refers to a solution that enables combat capabilities without manual installation, such as Positioning, Navigation and Timing (PNT) in the Army, which impacts a soldier’s ability to shoot, move and communicate. These tools are essential for artificial intelligence applications, and Mechanical Engineering Associate Professor Yi Wang has been researching how robotic systems can control mechanical systems on an AI-enabled PnP platform.

Wang’s research, “A plug-and-play tool based on online machine learning for real-time monitoring and control of mechanical systems,” began in July 2018 and is funded through a $417,000 grant from the Army Research Laboratory (ARL).

“All the military agencies are looking for technologies to combine fundamental research with real systems, and AI is a good merge of both,” Wang says. “My background is in engineering mathematics, and I think that works well with AI and machine learning. I am also interested in applied research, and AI can come up with new applications for real-world problems. It’s a challenging but promising area.”

Wang has developed and demonstrated a PnP ground robotics platform based on online neural network learning, which classifies unlabeled real-world data. The platform provides real-time monitoring, early predictions and indications, and control of mechanical systems to address a current need for fault detection and compensation. The online machine learning techniques will also lead to enhanced intelligence, anomaly mitigation and resilience of autonomous systems.
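The article does not detail the learning algorithm itself, but the general idea of online learning on unlabeled data can be sketched as follows: a small model is updated one sensor sample at a time, and its reconstruction error serves as a rough health indicator. The autoencoder structure, network sizes, learning rate and alert threshold below are illustrative assumptions, not details of Wang’s platform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny autoencoder (n_in -> n_hid -> n_in), updated one sample at a time (online SGD).
n_in, n_hid, lr = 6, 3, 0.01
W1 = rng.normal(scale=0.1, size=(n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_in, n_hid)); b2 = np.zeros(n_in)

def online_step(x):
    """One gradient update on a single unlabeled sensor vector; returns reconstruction error."""
    global W1, b1, W2, b2
    h = np.tanh(W1 @ x + b1)            # encode
    x_hat = W2 @ h + b2                 # decode (linear output)
    err = x_hat - x
    dW2 = np.outer(err, h); db2 = err   # backpropagate the squared error
    dh = (W2.T @ err) * (1.0 - h**2)
    dW1 = np.outer(dh, x); db1 = dh
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float(err @ err)

# Simulated stream of unlabeled sensor readings (e.g., vibration and temperature channels).
threshold = 1.0                          # assumed alert level; in practice derived from recent errors
for t in range(1000):
    x = 0.1 * rng.normal(size=n_in) + np.sin(0.05 * t)   # placeholder signal
    error = online_step(x)
    if t > 200 and error > threshold:
        print(f"t={t}: reconstruction error {error:.2f} -- possible anomaly")
```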

Budgetary issues have placed pressure on military and government agencies to extend the lifespans of combat systems. This has led to a greater emphasis on next-generation data analytics and AI to enable real-time situational response using Health and Usage Monitoring Systems (HUMS) data, which ensures the reliability and safety of vehicles. Wang hopes his research can lead to an innovative real-time machine learning framework that can adapt to any vehicle’s HUMS interface, while providing a reconfigurable control system to mitigate degraded system operation. According to Wang, the technology would be valuable to the Department of Defense, Department of Homeland Security and NASA, as well as to civilian applications.

“The performance of any Army vehicle degrades and becomes inferior due to long-term use, and the maintenance is usually not condition-based but the more expensive schedule-based,” Wang says. “We are developing an on-board sensor and processing unit that indicates if something goes wrong during vehicle operation. Since fleets have many vehicles with different usages, doing maintenance on the same schedule is a waste of resources.”

Prior to beginning his current research, Wang developed technological elements for analyzing military HUMS data in a modular environment. Some of the elements included neural network learning, model predictive control and an embedded computing platform. This led to further examination into predicting in-process performance and reconfiguring controls to complete missions despite degraded performance. The current research has allowed Wang and his team to optimize the AI-embedded computing platform in performance and functionality, increase the efficiency and reliability of the neural network model, develop fault detection for rapid responses, and reconstitute controls to prevent deviations. The technology is intended to be integrated with the Army’s designated workflow of HUMS analysis to enable real-time situational responses and system resilience.

Through this project, I have been introduced to robotic platforms, computer vision, control methods and artificial neural networks. Beyond technical engineering, Dr. Wang has given our team a thorough understanding of how to present our research.

– Graduate student Benjamin Albia

The research has also worked on optimizing robotic systems equipped with sensors and cameras and giving them PnP perception and decision-making capabilities. Based on captured images, the robot can automatically extract positioning information and determine, in real time, its orientation and the shortest distance from its location to the destination.

“The perception part is using different sensors to localize the robot. For example, we can use the cameras and another sensor called a LiDAR that emits pulsed light waves and receives the reflection to determine if there is an obstacle ahead and the relative position between the robot and the obstacle,” Wang says.
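As a rough illustration of the LiDAR side of that perception step, the sketch below scans a 2-D sweep of range returns and reports the nearest obstacle within a forward cone. The field of view, range limits and scan format are assumptions for the example, not specifics of Wang’s robot.

```python
import math

def nearest_obstacle(ranges, angles, fov_deg=60.0, max_range=10.0):
    """Scan a 2-D LiDAR sweep and return (distance_m, bearing_deg) of the closest
    return inside a forward cone, or None if the cone is clear."""
    half_fov = math.radians(fov_deg) / 2.0
    best = None
    for r, a in zip(ranges, angles):
        if not (0.05 < r < max_range):        # drop invalid or out-of-range returns
            continue
        if abs(a) > half_fov:                 # keep only beams pointing roughly ahead
            continue
        if best is None or r < best[0]:
            best = (r, math.degrees(a))
    return best

# Example: a 181-beam scan spanning -90 to +90 degrees with one return about 1.2 m ahead.
angles = [math.radians(a) for a in range(-90, 91)]
ranges = [8.0] * len(angles)
ranges[90] = 1.2                              # the beam pointing straight ahead
hit = nearest_obstacle(ranges, angles)
if hit:
    print(f"obstacle at {hit[0]:.2f} m, bearing {hit[1]:.1f} deg")
```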

Decision-making is the second part of the optimization. Using lines on the ground that prescribe a trajectory, the robot can apply image analysis to determine any deviations from the path to the destination point.

“An onboard controller and computing unit allow the robot to calculate and command the wheel speed in real time in order to follow the designated path,” Wang says. “The sensors, image analysis and how the computer interprets the signal using AI would be packaged within software that can be delivered and installed on any AI-enabled ground robot hardware.”
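The article does not give the control law, but a minimal sketch of the pipeline Wang describes might look like this: image analysis of a thresholded camera frame yields the line’s lateral offset from the image centre, and a simple proportional rule (an assumption here, not the project’s actual controller) maps that offset to left and right wheel speeds, with the cycle repeating at a fixed rate such as the 0.02-second update mentioned later in the article.

```python
import numpy as np

def line_offset(mask):
    """Lateral offset of the painted line from image centre, scaled to [-1, 1].
    mask: 2-D binary array where 1 marks line pixels (from thresholding the camera image)."""
    strip = mask[-20:]                       # bottom rows, nearest the robot
    cols = np.nonzero(strip)[1]
    if cols.size == 0:
        return None                          # line lost
    centre = (mask.shape[1] - 1) / 2.0
    return (cols.mean() - centre) / centre   # positive: line is to the right

def wheel_speeds(offset, v_nominal=0.3, k_p=0.5):
    """Differential-drive proportional steering: speed up the wheel opposite the offset."""
    turn = k_p * offset
    return v_nominal + turn, v_nominal - turn   # (left, right) wheel speeds in m/s

# One control cycle (would repeat every fixed time step on the onboard computer):
mask = np.zeros((120, 160), dtype=np.uint8)
mask[:, 95:105] = 1                          # synthetic line slightly right of centre
offset = line_offset(mask)
if offset is not None:
    v_left, v_right = wheel_speeds(offset)
    print(f"offset={offset:+.2f}  left={v_left:.2f} m/s  right={v_right:.2f} m/s")
```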

Another focus of Wang’s research is fault detection and control reconstitution during operation. One wheel on the ground robot is purposely misaligned to create an anomaly in the system. By reading sensor data, the robot can recognize the problem with the wheel and generate a new control signal to adjust for and correct it.

“The robot has the brain power to detect a system anomaly and actively think of an approach to address the problem. This will allow the robot to continue to achieve the mission in the presence of an anomaly,” Wang says.
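The detection and compensation algorithms are not disclosed in the article; one common residual-based pattern, sketched below under that assumption, compares commanded and measured wheel speeds, flags a persistent mismatch as a fault, and rescales the command to the degraded wheel so it still reaches the intended speed. The window size, threshold and effectiveness model are illustrative choices.

```python
from collections import deque

class WheelFaultMonitor:
    """Residual-based fault detection and simple control reconstitution for one wheel.
    Assumed actuator model: measured speed ~= effectiveness * commanded speed."""

    def __init__(self, window=25, threshold=0.15):
        self.residuals = deque(maxlen=window)   # recent |commanded - measured| / |commanded|
        self.threshold = threshold
        self.effectiveness = 1.0                # 1.0 means a healthy wheel

    def update(self, commanded, measured):
        if abs(commanded) > 1e-3:
            self.residuals.append(abs(commanded - measured) / abs(commanded))
        if len(self.residuals) == self.residuals.maxlen:
            mean_res = sum(self.residuals) / len(self.residuals)
            if mean_res > self.threshold:
                # Fault detected: estimate remaining effectiveness and rescale future commands.
                self.effectiveness = max(0.2, 1.0 - mean_res)

    def compensate(self, commanded):
        """Reconstituted command: boost the input so the degraded wheel reaches its target speed."""
        return commanded / self.effectiveness

# Example: a misaligned wheel that delivers only 70 percent of the commanded speed.
monitor = WheelFaultMonitor()
for _ in range(60):
    target = 0.4                                # desired wheel speed, m/s
    command = monitor.compensate(target)
    measured = 0.7 * command                    # degraded actuator response
    monitor.update(command, measured)
print(f"estimated effectiveness: {monitor.effectiveness:.2f}, "
      f"compensated command for 0.4 m/s: {monitor.compensate(0.4):.2f} m/s")
```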

Wang’s research is scheduled to end next month; however, he has requested a seven-month extension due to COVID-19 delays as well as the need for further collaboration with ARL personnel on a joint program to transition the technology toward structurally reconfigurable unmanned aircraft systems. He also wants to determine the possibility of restoring additional functionalities by reconfiguring the controller. “We want to further refine and optimize the controller in terms of response time and aim to restore the operational capabilities by injecting new control signals every 0.02 seconds,” Wang says.

In addition to defense needs, rescue operations and agriculture are two possible civilian applications once robots are given the power to make decisions.

“Robots could reach locations by the shortest path with the minimum amount of risk to rescue individuals,” Wang says. “Regarding agriculture, robots can design the trajectory to take samples of a large area to determine how crops are growing and any damage from insects or other factors, a process known as path planning. Once you give robots the power to make decisions, they can be used in many areas.”
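Path planning itself is not detailed in the article, but its core idea can be sketched on a small grid: each cell carries a risk or traversal cost, and the robot searches for the route with the lowest total cost, which covers the “minimum risk” rescue case and, when waypoints are chained together, coverage-style sampling of a field. The grid, costs and Dijkstra search below are illustrative assumptions.

```python
import heapq

def lowest_risk_path(grid, start, goal):
    """Dijkstra search over a 2-D grid of per-cell risk costs.
    Returns the list of (row, col) cells on the cheapest path from start to goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

# Example field: higher numbers mark riskier or less passable cells.
field = [
    [1, 1, 8, 1],
    [1, 9, 8, 1],
    [1, 1, 1, 1],
]
print(lowest_risk_path(field, (0, 0), (0, 3)))   # routes around the high-risk cells
```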

