Sophisticated algorithms powered by artificial intelligence (AI) may eventually allow drones to pilot themselves without human assistance, using visual cues to guide them from one location to another.
This is the goal of a two-year project led by researchers at the University of Missouri and funded by a $3.3 million grant from the US Army Corps of Engineers' premier research and development organization, the Engineer Research and Development Center (ERDC).
Kannappan Palaniappan, principal investigator on the project and Curators' Distinguished Professor of Electrical Engineering and Computer Science, said that the ability to operate autonomously becomes critical when the signal from GPS navigation is interrupted or lost, such as after a natural disaster or in military situations.
Most drones operating today require GPS navigation to fly, so when they lose that signal, whether from the aftermath of a natural disaster, occlusions in the built environment and terrain, or human intervention, they are not able to find their way around and will typically just land wherever they are. Unlike ground-based GPS navigation apps, which can reroute you if you miss a turn, there’s currently no option for airborne drones to reroute in these situations.
Kannappan Palaniappan, Professor and Principal Investigator, Department of Electrical Engineering and Computer Science, MU College of Engineering
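To make that failure mode concrete, here is a minimal sketch, in Python, of the fallback logic Palaniappan describes: today's firmware typically lands in place when the GPS fix drops, while the project aims to substitute a vision-based navigation mode. The mode names and the `select_mode` function are illustrative assumptions, not the MU team's actual software.

```python
from enum import Enum, auto

class FlightMode(Enum):
    GPS_NAV = auto()        # normal waypoint flight on GPS fixes
    VISION_NAV = auto()     # proposed fallback: navigate on visual cues
    LAND_IN_PLACE = auto()  # today's typical behavior when GPS drops out

def select_mode(gps_fix_ok: bool, visual_nav_available: bool) -> FlightMode:
    """Pick a navigation mode each control cycle (illustrative only)."""
    if gps_fix_ok:
        return FlightMode.GPS_NAV
    if visual_nav_available:
        # The project's goal: keep flying using scene features
        # instead of landing wherever the drone happens to be.
        return FlightMode.VISION_NAV
    return FlightMode.LAND_IN_PLACE
```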
Currently, a drone pilot must operate the device manually and maintain a high degree of situational awareness to keep the drone within line of sight and avoid collisions with objects such as buildings, trees, mountains, bridges, signs, and other notable structures.
Using optical sensors and algorithms, Palaniappan and colleagues are developing software that will let drones fly autonomously, independently perceiving and interacting with their surroundings as they carry out predetermined goals or objectives.
We want to take the range of skills, attributes, contextual scene knowledge, mission planning, and other capacities that drone pilots possess and incorporate them, along with weather conditions, into the drone’s software so it can make all of those decisions independently.
Kannappan Palaniappan, Professor and Principal Investigator, Department of Electrical Engineering and Computer Science, MU College of Engineering
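One way to picture that goal is a single decision step that weighs mission state, scene context, and weather the way a pilot would. The sketch below is a hypothetical illustration; the `SceneContext` fields, the wind threshold, and the action names are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SceneContext:
    obstacles_ahead: bool   # contextual scene knowledge from onboard vision
    wind_speed_mps: float   # weather input, e.g. from a forecast feed
    goal_reached: bool      # mission planning state

MAX_SAFE_WIND_MPS = 12.0    # hypothetical safety threshold

def next_action(ctx: SceneContext) -> str:
    """One decision step standing in for a pilot's judgment (sketch only)."""
    if ctx.goal_reached:
        return "hover_and_report"
    if ctx.wind_speed_mps > MAX_SAFE_WIND_MPS:
        return "hold_position"   # weather-aware decision
    if ctx.obstacles_ahead:
        return "replan_route"    # react to the perceived scene
    return "continue_mission"
```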
Advancing Intelligent Scene Perception
Thanks to recent developments in visual sensor technology, such as light detection and ranging (lidar) and thermal imaging, drones can already carry out some restricted advanced-level tasks, including object detection and visual recognition.
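As a rough illustration of such tasks, the snippet below runs an off-the-shelf pretrained detector on a single camera frame using torchvision. It is a generic stand-in; the article does not say which models or sensors the team actually uses.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

# Load a generic pretrained detector (a stand-in, not the MU team's model).
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = read_image("frame.jpg")        # one RGB frame from the drone camera
batch = [weights.transforms()(frame)]  # normalize to the model's input format

with torch.no_grad():
    detections = model(batch)[0]       # boxes, labels, confidence scores

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:                    # keep only confident detections
        name = weights.meta["categories"][int(label)]
        print(f"{name}: {score:.2f} at {box.tolist()}")
```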
When paired with the team's algorithms, which are driven by machine learning and deep learning, two subsets of AI, drones could also aid in developing advanced 3D and 4D imaging for mapping and monitoring applications.
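Such 3D maps are generally built by combining overlapping 2D views. As a toy example of that principle (not the team's pipeline), the sketch below triangulates sparse 3D points from two drone frames with OpenCV; the intrinsics matrix `K` is a made-up placeholder that a real system would calibrate.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics; a real pipeline would calibrate these.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

def two_view_points(img1_path: str, img2_path: str) -> np.ndarray:
    """Recover sparse 3D points from two overlapping frames (toy example)."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)                 # detect and describe features
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Estimate relative camera motion, then triangulate matched pixels.
    E, mask = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (pts4d[:3] / pts4d[3]).T            # N x 3 point cloud
```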
As humans, we have been incorporating 3D models and dynamical knowledge of movement patterns in our surroundings using our visual system since we were little kids. Now, we are trying to decode the salient features of the human visual system and build those capabilities into autonomous vision-based aerial and ground-based navigation algorithms.
Kannappan Palaniappan, Professor and Principal investigator, Department of Engineering and Computer Science, MU College of Engineering
Developing these advanced imaging capabilities requires computational resources, such as processing power, memory, and time, that exceed what the software systems typically available on board a drone can provide.
To find a possible solution, the MU-led team is exploring how to leverage a combination of cloud, high-performance, and edge computing techniques.
Palaniappan said, “After a severe storm or a natural disaster, there will be damage to buildings, waterways, and other forms of infrastructure, and a 3D reconstruction of the area could help everyone from first responders to government officials understand how much damage has taken place. By allowing the drone to collect the raw data and transmit that information to the cloud, the cloud-supported high-performance computing software can complete the analysis and develop the 3D digital twin model without the need for additional software to be physically installed and accessible on the drone.”
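Client-side, that division of labor (capture on the drone, heavy analysis in the cloud) might look like the following sketch. The ingest URL and job API are invented for illustration; the article does not describe the team's actual interface.

```python
import requests

# Hypothetical cloud endpoint; the team's actual service is not described.
INGEST_URL = "https://example.com/api/reconstruction-jobs"

def offload_frames(frame_paths: list[str]) -> str:
    """Send raw frames to a cloud service that builds the 3D digital twin."""
    files = [("frames", open(p, "rb")) for p in frame_paths]
    try:
        resp = requests.post(INGEST_URL, files=files, timeout=60)
        resp.raise_for_status()
    finally:
        for _, fh in files:
            fh.close()
    # The drone only collects and transmits; analysis happens server-side.
    return resp.json()["job_id"]
```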
The MU team includes Prasad Calyam, Filiz Bunyak, and Joshua Fraser, along with researchers from Saint Louis University, the University of California, Berkeley, and the University of Florida.