Jan 22 2021
Initially developed for covert military operations, unmanned aerial vehicles (UAVs), or drones, have since gained tremendous popularity, which has broadened the scope of their use. In fact, remotely piloted drones have largely given way to autonomous drones in a wide range of fields.
One such application is their use in rescue missions following natural or man-made disasters. However, this often requires the drones to land safely on uneven terrain, which can be very difficult to execute.
"While it is desirable to automate the landing using a depth camera that can gauge terrain unevenness and find suitable landing spots, a framework serving as a useful base needs to be developed first," observes Dr. Chinthaka Premachandra from Shibaura Institute of Technology (SIT), Japan, whose research group studies potential applications of camera-based quadrocopter drones.
Accordingly, Dr. Premachandra and his team set out to design an automatic landing system; they have detailed their approach in their latest study published in IEEE Access.
To keep things simple, they upgraded a standard radio-controlled (RC) drone with the necessary hardware and software and equipped it with a simple 2D camera to detect a landing pad marked with a distinctive symbol.
"The challenges in our project were two-fold. On the one hand, we needed a robust and cost-effective image-processing algorithm to provide position feedback to the controller. On the other, we required a fail-safe switch logic that would allow the pilot to abort the autonomous mode whenever required, preventing accidents during tests," explains Dr. Premachandra.
Eventually, the team arrived at a design comprising the following components: a commercial flight controller (for attitude control), a Raspberry Pi 3B+ (for autonomous position control), a modified wide-angle Raspberry Pi v1.3 camera (for horizontal position feedback), a servo gimbal (to control the camera's orientation), a Time-of-Flight (ToF) module (as a feedback sensor for the drone's height), a multiplexer (for switching between manual and autonomous modes), an "anti-windup" PID controller (for height control), and two PD controllers (for horizontal movement control).
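The "anti-windup" part of the height controller deserves a brief aside: because motor commands saturate, the integral term has to be prevented from accumulating while the output sits at its limit. The paper's actual controller and tuning are not reproduced here; the snippet below is only a minimal sketch of one common anti-windup scheme (conditional integration), with every value invented for illustration.

```python
# Minimal sketch of a PID height controller with a conditional-integration
# anti-windup scheme. All gains, output limits, and example values are
# illustrative placeholders, not figures from the study.

class AntiWindupPID:
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / dt
        self.prev_error = error

        # Output before clamping, using the current integral state
        unsat = self.kp * error + self.ki * self.integral + self.kd * derivative
        output = max(self.out_min, min(self.out_max, unsat))

        # Anti-windup: stop accumulating the integral while the output is
        # saturated, unless the error would drive it back out of saturation.
        if unsat == output or error * unsat < 0:
            self.integral += error * dt

        return output

# Example: hold a 1.5 m height given a ToF reading of 1.42 m (placeholder values)
height_pid = AntiWindupPID(kp=0.8, ki=0.3, kd=0.4, out_min=-0.3, out_max=0.3)
thrust_correction = height_pid.update(setpoint=1.5, measurement=1.42, dt=0.02)
```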
In addition, they implemented an image-processing algorithm that detected a distinctive landing symbol (in the shape of an "H") in real time and converted its image pixel coordinates into physical coordinates, thereby generating the horizontal position feedback.
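The article does not spell out the exact conversion, but with a downward-facing camera and a known height from the ToF sensor, a standard pinhole-camera back-projection is one way such a pixel-to-metres mapping could work. In the sketch below, the focal lengths and principal point are assumed to come from camera calibration, and all numbers are placeholders.

```python
# Hypothetical sketch: convert the detected "H" marker's pixel position into a
# horizontal offset in metres, assuming a calibrated camera pointing straight
# down. fx, fy are focal lengths in pixels; (cx, cy) is the principal point.

def pixel_to_horizontal_offset(u, v, height_m, fx, fy, cx, cy):
    """Back-project the marker centre (u, v) onto the ground plane.

    With a pinhole model and a nadir-pointing camera, a pixel offset of
    (u - cx) corresponds to a lateral distance of height * (u - cx) / fx.
    """
    dx = height_m * (u - cx) / fx  # metres along the image x-axis
    dy = height_m * (v - cy) / fy  # metres along the image y-axis
    return dx, dy

# Example with placeholder calibration values for a 640x480 image
dx, dy = pixel_to_horizontal_offset(u=410, v=233, height_m=1.5,
                                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```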
Interestingly, they found that introducing an adaptive "region of interest" helped speed up the computation of the camera's vertical distance to the landing symbol, greatly reducing the computing time from 12-14 milliseconds to a meagre 3 milliseconds!
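The article does not describe the region-of-interest logic in detail, but the general idea behind such a speed-up is to run the full detector on the whole frame only when the marker is lost, and otherwise to process just a small window around the last known position. A rough illustration using OpenCV, with a deliberately simplified stand-in for the actual symbol detector:

```python
import cv2

# Rough illustration of an adaptive region of interest (ROI) for tracking the
# landing symbol in a grayscale frame. detect_landing_symbol() below is a
# simplified stand-in for the study's actual "H"-shape detector.

def detect_landing_symbol(gray):
    # Placeholder detector: Otsu threshold, then take the largest contour's
    # centroid. The real algorithm also verifies the "H" shape.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def track_with_adaptive_roi(gray, last_center, window=120):
    h, w = gray.shape[:2]
    if last_center is None:
        return detect_landing_symbol(gray)            # full-frame search
    x, y = last_center
    x0, y0 = max(0, x - window), max(0, y - window)
    x1, y1 = min(w, x + window), min(h, y + window)
    hit = detect_landing_symbol(gray[y0:y1, x0:x1])   # far fewer pixels
    if hit is None:
        return detect_landing_symbol(gray)            # lost: fall back
    return hit[0] + x0, hit[1] + y0                   # back to frame coords
```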
Following detection, the system carried out the landing in two steps: first flying towards the landing spot and hovering over it while maintaining height, and then landing vertically. Both steps were automated and controlled by the Raspberry Pi module.
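Schematically, those two automated steps can be pictured as a small control loop on the Raspberry Pi: centre the drone over the marker at a fixed height, then descend once the horizontal error is small enough. The sketch below only illustrates that flow; the callables, thresholds, and gains are invented placeholders rather than the authors' implementation.

```python
# Illustrative control flow for the two-step landing described above. The
# three callables stand in for the real sensor and command interfaces; all
# thresholds and gains are invented placeholders.

HOVER_HEIGHT = 1.5        # m: height held while centring over the marker
CENTRED_TOLERANCE = 0.10  # m: horizontal error below which descent begins
TOUCHDOWN_HEIGHT = 0.05   # m: height at which the sequence ends

def landing_sequence(get_horizontal_error, get_height, send_velocity):
    state = "APPROACH"
    while state != "LANDED":
        dx, dy = get_horizontal_error()   # camera-derived offset, metres
        z = get_height()                  # ToF height reading, metres

        if state == "APPROACH":
            # Step 1: move over the marker while holding a constant height
            send_velocity(vx=-0.5 * dx, vy=-0.5 * dy,
                          vz=0.4 * (HOVER_HEIGHT - z))
            if abs(dx) < CENTRED_TOLERANCE and abs(dy) < CENTRED_TOLERANCE:
                state = "DESCEND"
        else:
            # Step 2: descend vertically while still correcting small drift
            send_velocity(vx=-0.5 * dx, vy=-0.5 * dy, vz=-0.2)
            if z < TOUCHDOWN_HEIGHT:
                state = "LANDED"
```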
While examining the landing behavior, the research team noticed a disturbance, which they attributed to aerodynamic lift acting on the quadrocopter. This problem could, however, be overcome by boosting the gain of the PID controller, and overall performance during the landing process indicated a properly functioning autonomous system.
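The article only states that the gain was increased; one way to picture applying such a boost selectively is to scale the height controller's gains when the drone is close to the ground, where the extra lift matters. The snippet below is purely an illustration of that idea, with an invented threshold and scale factor, not the authors' tuning.

```python
# Purely illustrative: boost the height controller's gains near the ground,
# where the extra aerodynamic lift disturbs the descent. The threshold and
# the scale factor are invented placeholders.

GROUND_EFFECT_HEIGHT = 0.4  # m: below this height, apply the boosted gains
GAIN_BOOST = 1.5            # multiplier applied to the nominal gains

def scheduled_gains(height_m, kp, ki, kd):
    """Return (kp, ki, kd), boosted when close to the ground."""
    if height_m < GROUND_EFFECT_HEIGHT:
        return kp * GAIN_BOOST, ki * GAIN_BOOST, kd * GAIN_BOOST
    return kp, ki, kd
```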
With these results, Dr. Premachandra and his team look forward to upgrading their system with a depth camera, thus enabling drones to find even more applications in daily life. "Our study was primarily motivated by the application of drones in rescue missions, but it shows that drones can, in the future, find use in indoor operations such as indoor transportation and inspection, which can reduce a lot of manual labor," concludes Prof. Premachandra.