
Researchers Employ AI Models to Improve Sky Safety

In the coming years, the number of autonomous drone aircraft operating in uncontrolled airspace below 400 feet is expected to rise significantly. By 2027, experts predict that the US will have a fleet of around 1 million commercial unmanned aircraft systems (UAS) performing tasks such as package delivery, traffic monitoring, and emergency aid.


Image Credit: Johns Hopkins University

A team of researchers, headed by Lanier Watkins and Louis Whitcomb from the Institute for Assured Autonomy, has utilized artificial intelligence to devise a system aimed at enhancing the safety of drone traffic. This system replaces certain human-in-the-loop processes with autonomous decision-making.

Their findings were published in IEEE Computer.

We wanted to see if different approaches using AI could handle the expected scale of these operations in a safe manner, and it did. Our simulated system leverages autonomy algorithms to enhance the safety and scalability of UAS operations below 400 feet altitude.

Lanier Watkins, Principal Professional Staff, Applied Physics Laboratory, Johns Hopkins University

The Hopkins team examined the effects of autonomous algorithms in a simulated 3D airspace to address the issue of growing UAS traffic. The team knew from earlier studies that collision avoidance algorithms significantly decreased accidents. They found that adding strategic deconfliction algorithms, which regulate traffic scheduling to prevent collisions, further improved safety and nearly eliminated airspace mishaps.
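The paper does not describe the deconfliction algorithm in this level of detail, but the basic idea can be sketched as pre-flight scheduling that keeps planned trajectories separated in space and time. The following Python sketch is purely illustrative: the waypoint format, separation thresholds, and greedy delay strategy are assumptions, not the authors' implementation.

```python
import itertools
import math

# Hypothetical 4D waypoint: (x, y, z) position in meters plus a time in seconds.
Waypoint = tuple[float, float, float, float]

MIN_SEPARATION_M = 50.0  # assumed minimum separation between drones


def waypoints_conflict(a: Waypoint, b: Waypoint, time_tol_s: float = 5.0) -> bool:
    """Two planned waypoints conflict if they are close in both space and time."""
    ax, ay, az, at = a
    bx, by, bz, bt = b
    close_in_time = abs(at - bt) <= time_tol_s
    close_in_space = math.dist((ax, ay, az), (bx, by, bz)) < MIN_SEPARATION_M
    return close_in_time and close_in_space


def deconflict(flight_plans: list[list[Waypoint]], delay_step_s: float = 30.0) -> list[float]:
    """Assign each flight a departure delay so that no two planned routes conflict.

    Greedy strategic deconfliction: keep pushing a flight's start time back
    until its shifted trajectory is separated from every already-scheduled one.
    """
    delays: list[float] = []
    scheduled: list[list[Waypoint]] = []
    for plan in flight_plans:
        delay = 0.0
        while True:
            shifted = [(x, y, z, t + delay) for x, y, z, t in plan]
            conflicts = any(
                waypoints_conflict(p, q)
                for other in scheduled
                for p, q in itertools.product(shifted, other)
            )
            if not conflicts:
                break
            delay += delay_step_s
        scheduled.append(shifted)
        delays.append(delay)
    return delays
```

In this toy version, conflicts are resolved only by delaying departures; a real system could also reroute or re-altitude flights, and would check separation continuously along the trajectory rather than at discrete waypoints.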

Additionally, the researchers integrated two realistic features into their simulator. "Noisy sensors" replicate the unpredictability of real-world conditions, testing the system's adaptability. Meanwhile, a "fuzzy inference system" calculates a risk level for each drone, considering factors such as proximity to obstacles and adherence to the planned route.
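As a rough illustration of how such a risk estimate might be computed, the sketch below combines two fuzzy inputs, distance to the nearest obstacle and deviation from the planned route, into a single risk level, with simulated sensor noise added to the measurements. The membership thresholds, function names, and rule base are assumptions for illustration only, not the system described in the paper.

```python
import random


def noisy_reading(true_value: float, sigma: float = 3.0) -> float:
    """Simulate an imperfect sensor by adding zero-mean Gaussian noise."""
    return true_value + random.gauss(0.0, sigma)


def ramp_down(x: float, low: float, high: float) -> float:
    """Membership that is 1 at or below `low`, 0 at or above `high`, linear between."""
    if x <= low:
        return 1.0
    if x >= high:
        return 0.0
    return (high - x) / (high - low)


def ramp_up(x: float, low: float, high: float) -> float:
    """Membership that is 0 at or below `low`, 1 at or above `high`, linear between."""
    return 1.0 - ramp_down(x, low, high)


def drone_risk(obstacle_distance_m: float, route_deviation_m: float) -> float:
    """Fuzzy-style risk in [0, 1] from obstacle proximity and route adherence.

    Illustrative rule base:
      - IF the nearest obstacle is near      THEN risk is high
      - IF the drone is off its planned route THEN risk is high
    The two rule activations are combined with a fuzzy OR (max).
    """
    near_obstacle = ramp_down(obstacle_distance_m, low=10.0, high=100.0)
    off_route = ramp_up(route_deviation_m, low=5.0, high=50.0)
    return max(near_obstacle, off_route)


# Example: a drone 30 m from an obstacle and 8 m off its planned route,
# with both measurements corrupted by simulated sensor noise.
obstacle_distance = max(0.0, noisy_reading(30.0))
route_deviation = max(0.0, noisy_reading(8.0))
print(f"estimated risk level: {drone_risk(obstacle_distance, route_deviation):.2f}")
```

A monitoring loop could then flag any drone whose risk level stays above a chosen threshold and hand it off for corrective action.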

Watkins and Whitcomb assert that these techniques enable the system to make decisions autonomously, thereby preventing collisions.

Our study considered a variety of variables, including scenarios that involve ‘rogue drones’ that deviated from their planned routes. The results are very promising.

Louis Whitcomb, Professor, Department of Mechanical Engineering, Johns Hopkins University

For a more accurate portrayal, the team intends to improve its simulations by adding dynamic obstacles, such as weather, and other real-world elements.

The study, according to Watkins, draws on more than two decades of research on strengthening the safety of the National Airspace System of the United States carried out at the Johns Hopkins University Applied Physics Laboratory.

Watkins concluded, “This work has been investigated through simulating performance in environments and systems that are being considered for deployment by third parties in future airspaces, as well as in the academic and basic research IEEE and ACM communities. This work helps researchers understand how autonomy algorithms that protect airspace can behave when faced with noise and uncertainty in 3D-simulated airspace and underscores the need to continuously monitor the results from these autonomous algorithms to ensure they have not reached potential failure states.”

Journal Reference:

Watkins, L., et al. (2023) The Roles of Autonomy and Assurance in the Future of Uncrewed Aircraft Systems in Low-Altitude Airspace Operations. IEEE Computer. https://doi.org/10.1109/MC.2023.3242579

