
Researchers Incorporate Human-Like 3-D Visual Perception in Robots

The research team at Purdue University, led by Zygmunt Pizlo, has been conducting critical research at the Visual Perception Lab. The team's newly developed technology will be licensed and commercialized shortly.

The team has developed a robot named Čapek that can observe and mimic the movements and activities of the research team members. The major objective of this research is to simulate visual perception so that the robot can "see" like humans.

Pizlo, a professor in Purdue's Department of Psychological Sciences, said that one of the most challenging tasks in artificial intelligence and robotics is making robots and other machines capable of human-like 3-D visualization. So far, robotic vision research has concentrated on recording and analyzing 2-D images rather than on 3-D visual perception. The new approach focuses on the robot's ability to comprehend the 3-D scene in front of it, together with any object present in its field of view.

Pizlo, who has over three decades of expertise, has been working with postdoctoral research associates Yunfeng Li and Tadamasa Sawada. Sawada comments that humans can carry out computationally complex cognitive functions, and making a robot capable of accomplishing such intricate functions is a real challenge.

Figure-ground perception can be defined as the human visual system's ability to simplify a picture or scene into a main object while cognitively shifting everything else into the background.
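
The article does not disclose the Purdue team's algorithms, but a rough 2-D analogue of figure-ground separation can be sketched with OpenCV's GrabCut segmentation, which splits an image into a foreground "figure" and a background. The image path and initial rectangle below are placeholder assumptions used purely for illustration.

```python
import cv2
import numpy as np

# Placeholder image path (assumption, not from the article).
image = cv2.imread("scene.jpg")

# GrabCut needs an initial rectangle roughly enclosing the "figure".
rect = (50, 50, 300, 300)  # (x, y, width, height) -- illustrative values

mask = np.zeros(image.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Iteratively estimate which pixels belong to the foreground object.
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground; push everything else to background.
figure = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("figure_only.png", image * figure[:, :, np.newaxis])
```

This 2-D segmentation is far simpler than the human capability described above, which operates on full 3-D scenes rather than flat images.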

According to Pizlo, conventional robotic vision technology relies on multiple cameras combined with sensors and laser range finders for detection. Although capable of basic object recognition, these existing systems do not mimic human-like 3-D perception.
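
For context on how such camera-and-range-finder setups recover 3-D coordinates, the sketch below back-projects a detected pixel and its measured range into the camera frame using a generic pinhole-camera model. This is not the Purdue system; the intrinsic parameters and pixel values are assumed for illustration.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a measured depth (metres) into a
    3-D point in the camera frame, using a pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example with made-up intrinsics and a made-up range reading.
point = backproject(u=320, v=240, depth=1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point)  # 3-D coordinates of the detected pixel, in metres
```

Pipelines like this locate objects geometrically, but they fall well short of the human-like 3-D scene understanding the Purdue researchers are pursuing.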

