
Controlling 6-DOF Robotic Manipulators with "Wand Mapping"

In a recent paper posted to the arXiv preprint server, researchers introduced a new method called “wand mapping” for controlling six-degree-of-freedom (6-DOF) robotic manipulators. The method creates a virtual rigid linkage between the user’s hand and the robot’s end-effector.


Background

6-DOF robotic manipulators are devices that move in three-dimensional (3D) space with both translational and rotational motion. They can assist with tasks such as assembling parts, performing surgeries, and supporting people with disabilities. However, controlling them is particularly challenging when they operate near the user and require direct visual feedback.

In such scenarios, position-to-position control methods are preferred over position-to-velocity methods because they perform better and users tend to prefer them. However, implementing position-to-position control faces two main challenges: position-to-position interfaces tend to be bulky, and directly mapping a user’s body movements to the robotic end-effector is constrained by the user’s range of motion.

About the Research

In this paper, the authors proposed an alternative approach that uses a lever, or fixed-length "wand", to convert body translations into reduced rotations and body rotations into amplified translations. In this method, the user is equipped with a virtual wand whose tip indicates the desired position and orientation of the robot’s end-effector.

This mapping expands the translation workspace: because the wand acts as a lever, hand rotations produce end-effector translations proportional to the wand’s length. In exchange, it reduces the rotational workspace, since reorienting the end-effector requires the user’s hand to travel around it.
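
In code, the mapping amounts to attaching a fixed-length rigid offset to the tracked hand pose. The sketch below illustrates the idea rather than the authors' implementation; the wand length, the choice of the hand's forward axis, and the variable names are assumptions.

    import numpy as np

    def wand_mapping(hand_pos, hand_rot, wand_length=0.5):
        """Map a tracked hand pose to a desired end-effector pose through a
        virtual rigid "wand" of fixed length attached to the hand.

        hand_pos    : (3,) hand position in metres
        hand_rot    : (3, 3) hand orientation as a rotation matrix
        wand_length : assumed wand length in metres (illustrative value)
        """
        # The wand is assumed to extend along the hand's local z (forward) axis.
        wand_offset = hand_rot @ np.array([0.0, 0.0, wand_length])
        desired_pos = hand_pos + wand_offset   # hand rotations sweep the tip
        desired_rot = hand_rot                 # tip orientation follows the hand
        return desired_pos, desired_rot

With such a linkage, a hand rotation of angle θ moves the wand tip by roughly wand_length × θ, which is why the reachable translation workspace grows with the wand's length while pure end-effector rotations become more demanding.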

Furthermore, the system incorporated an OptiTrack motion capture system, a HoloLens 2 augmented reality (AR) headset, and a 7-DOF Kinova Gen3 ultra-lightweight robot. The AR headset provided participants with visual feedback, displaying the desired end-effector location and the robot's current end-effector position in a semi-transparent overlay. A resolved-rate controller drove the robot's end-effector toward the desired location.
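
A resolved-rate controller computes joint velocities that drive the end-effector toward a commanded pose. The following is a generic, minimal sketch of the positional part only, not the authors' controller; forward_kinematics and jacobian are hypothetical placeholders for the robot model.

    import numpy as np

    def resolved_rate_step(q, target_pos, forward_kinematics, jacobian,
                           gain=1.0, dt=0.01):
        """One control step: joint motion from the Cartesian position error.

        q                  : (n,) current joint angles
        target_pos         : (3,) desired end-effector position
        forward_kinematics : q -> (3,) end-effector position (placeholder)
        jacobian           : q -> (3, n) positional Jacobian (placeholder)
        """
        error = target_pos - forward_kinematics(q)   # Cartesian position error
        v_des = gain * error                         # proportional velocity command
        q_dot = np.linalg.pinv(jacobian(q)) @ v_des  # least-squares joint velocities
        return q + q_dot * dt                        # integrate one time step

Orientation error would be handled analogously, by stacking an angular-velocity command with the rotational rows of the Jacobian.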

Experimental Comparison

The researchers compared the wand mapping method with the traditional one-to-one direct mapping approach. In direct mapping, the hand's displacement from the start of the experiment to the present moment maps directly onto the desired displacement of the robot's end-effector.
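
For contrast with the wand mapping sketched above, a minimal version of direct one-to-one mapping (again illustrative, with hypothetical variable names) simply applies the hand's displacement and relative rotation to the end-effector's starting pose:

    import numpy as np

    def direct_mapping(hand_pos, hand_rot, hand_pos0, hand_rot0, ee_pos0, ee_rot0):
        """One-to-one mapping: the hand's displacement and relative rotation since
        the start are applied directly to the end-effector's starting pose."""
        desired_pos = ee_pos0 + (hand_pos - hand_pos0)    # same translation
        desired_rot = (hand_rot @ hand_rot0.T) @ ee_rot0  # same relative rotation
        return desired_pos, desired_rot

Unlike the wand mapping, this mapping preserves rotations one-to-one but limits translations to the user's own range of motion.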

To ensure consistent, comparable performance and hand trajectories, a series of targets was designed so that both mapping methods required the same starting and ending hand positions. The targets involved a 15 cm hand translation and rotations ranging from 0° to 45° around randomly selected, uniformly distributed 3D axes.
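
Generating targets of this kind, a fixed 15 cm translation plus a rotation of up to 45° about a uniformly distributed random axis, could look like the sketch below. It is an illustration, not the authors' target generator; in particular, sampling the translation direction at random is an assumption.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def sample_target(rng, translation=0.15, max_angle_deg=45.0):
        """Random target: a fixed-length translation along a random direction plus
        a rotation about a uniformly distributed random 3D axis."""
        # Normalising a Gaussian vector gives a direction uniform on the sphere.
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)

        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))

        return translation * direction, Rotation.from_rotvec(angle * axis)

    rng = np.random.default_rng(0)
    offset, rotation = sample_target(rng)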

Participant Study

An experimental study was conducted with 20 volunteers aged between 18 and 55. Each participant used both control modes and performed seven trials per mode, with each trial requiring them to reach 30 targets: 15 central points and 15 outer targets.

The targets alternated between central and outer locations, resulting in 15 back-and-forth trajectories. In each operational mode, the 4th and 6th trials were conducted without the AR visualization of the desired end-effector or the wand, while the remaining trials included full AR visualization.
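
The protocol described above can be written down as a simple schedule; the sketch below only encodes what is stated in the text (seven trials per mode, 30 alternating targets, no AR visualization in trials 4 and 6), with hypothetical naming.

    # Seven trials per control mode; trials 4 and 6 drop the AR visualization.
    NO_VISUALIZATION_TRIALS = {4, 6}

    def build_schedule(mode):
        """Return the trial list for one control mode ("wand" or "direct")."""
        schedule = []
        for trial in range(1, 8):
            # 30 targets alternating central/outer -> 15 back-and-forth trajectories.
            targets = ["central" if i % 2 == 0 else "outer" for i in range(30)]
            schedule.append({
                "mode": mode,
                "trial": trial,
                "ar_visualization": trial not in NO_VISUALIZATION_TRIALS,
                "targets": targets,
            })
        return schedule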

Several metrics were measured for each trial, including task completion time, overshoot, hand motions, and motor control behaviors. Additionally, after all experimental sessions, participants completed a six-question questionnaire covering cognitive load, intuitiveness, satisfaction, and preference for each control mode.
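
As an example of how two of these metrics might be computed from a recorded end-effector trajectory, the sketch below uses an assumed tolerance for "reaching" a target and an illustrative definition of overshoot (path length travelled after the point of closest approach), which may differ from the paper's exact definitions.

    import numpy as np

    def completion_time(timestamps, positions, target, tol=0.01):
        """Time until the end-effector first comes within `tol` metres of the target."""
        dist = np.linalg.norm(positions - target, axis=1)
        reached = np.flatnonzero(dist < tol)
        return timestamps[reached[0]] - timestamps[0] if reached.size else np.nan

    def overshoot(positions, target):
        """Illustrative overshoot: path length travelled after the sample of
        closest approach to the target (assumed definition)."""
        dist = np.linalg.norm(positions - target, axis=1)
        i_closest = int(np.argmin(dist))
        steps = np.diff(positions[i_closest:], axis=0)
        return float(np.linalg.norm(steps, axis=1).sum())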

Research Findings

The results showed comparable performance and preference for both mapping methods, with no significant differences observed in task completion time, overshoot, hand motions, or motor control behaviors. All participants achieved a 100% success rate across all targets in both modes, regardless of visualization.

Task duration and overshoot remained relatively consistent across trials, except for the 2nd and 4th trials, which reflected participants' efforts to speed up their movements and to adjust to the visual perturbation, respectively. Overshoot decreased in the 6th trial, performed without visualization, indicating improved integration of the mapping.

Hand motions were characterized by two distinct phases: a ballistic phase comprising about 50% of the duration and 80-95% of the motion amplitude, followed by an adjustment phase accounting for the remaining 50% of the duration and 5-20% of the amplitude.

Motor control behaviors showed a linear spatial coordination pattern between rotation and translation errors, deviating slightly below the y = x line. Additionally, the questionnaire responses revealed no significant differences between the two mappings, with participants generally perceiving both modes positively across all evaluated criteria.

Conclusion

In summary, the novel approach proved effective for 6-DOF robotic manipulation and could suit scenarios such as controlling the robot with the head or trunk, or tasks that require large translations with minimal rotations. However, the researchers acknowledged that the method cannot easily handle complex orientation changes while the position is held fixed.

They suggested incorporating automatic handling, velocity control, or hybrid modes that switch between or combine different mappings for specific scenarios, such as navigating a wheelchair or grasping objects, to further improve performance.

Journal Reference

Poignant, A., Morel, G., & Jarrassé, N. (2024). Teleoperation of a robotic manipulator in peri-personal space: a virtual wand approach. arXiv preprint arXiv:2406.09309. https://doi.org/10.48550/arXiv.2406.09309


Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

