Mar 6 2013
This article was updated on 3 September 2019.
For the digital acquisition of an image, two kinds of image sensors are used: CCD and CMOS. Both deliver similar image quality, but they differ significantly in how they work and in several other key features.
The charge-coupled device (CCD) image sensor is an analog device. The charge accumulated in each pixel is shifted to the edge of the sensor and passed to an analog-to-digital converter, which converts the charge into a digital value for each pixel.
CMOS sensors generate a digital image more efficiently and consume less power than a CCD. They can also be made larger than a CCD, enabling higher-resolution images, and they are more economical to manufacture.
Each pixel in a CMOS sensor contains a photodetector and an amplifier. These image sensors do not measure color directly; the amount of light falling on each pixel is simply converted to a digital value.
Color filter arrays may be used to determine color. The most common is the Bayer filter, which filters the light entering each pixel so that only one primary color is detected at each site.
A full-color image is then reconstructed by interpolating, at every pixel, the two colors that were not measured from neighboring pixels (a step known as demosaicing). Because only one color is actually known at each pixel, some image fidelity is lost in this reconstruction.
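As a rough illustration of the demosaicing step, the sketch below performs simple bilinear interpolation on a raw RGGB Bayer mosaic. The layout, function name, and use of NumPy/SciPy here are assumptions for illustration only, not the pipeline of any particular sensor.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicing of a raw RGGB Bayer mosaic (2D array)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))

    # Assumed RGGB layout: R at (0,0), G at (0,1) and (1,0), B at (1,1).
    masks[0::2, 0::2, 0] = 1.0   # red sites
    masks[0::2, 1::2, 1] = 1.0   # green sites (even rows)
    masks[1::2, 0::2, 1] = 1.0   # green sites (odd rows)
    masks[1::2, 1::2, 2] = 1.0   # blue sites

    # Weighted average of the nearest measured samples of each color.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])

    for c in range(3):
        samples = raw * masks[..., c]
        num = convolve(samples, kernel, mode="mirror")
        den = convolve(masks[..., c], kernel, mode="mirror")
        interpolated = num / np.maximum(den, 1e-12)
        # Keep the measured value where it exists; interpolate elsewhere.
        rgb[..., c] = np.where(masks[..., c] > 0, raw, interpolated)

    return rgb

# Example: a synthetic 4x4 mosaic produces a 4x4x3 color image.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_bilinear(raw).shape)  # (4, 4, 3)
```

Each missing color value is estimated as a weighted average of the nearest pixels that did measure that color, which is why fine detail and sharp edges can lose some fidelity.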
To overcome this, panchromatic filters are being used that, like the human eye, can detect light and dark as well as color.
Image sensors are sophisticated optical sensors that enable robots to see their surroundings and identify objects. With multiple cameras and suitable algorithms, it is possible to obtain data for 3D profiles. Both solid-state imaging devices and vacuum-tube devices are available.
Imaging systems normally comprise linear and 2D photodiode arrays, charge-injection devices, and charge-coupled devices. When using image sensors, material properties and lighting techniques must be considered.
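As a minimal sketch of how two cameras can provide 3D data, the snippet below applies the standard stereo triangulation relation Z = f × B / d to a rectified camera pair. The focal length, baseline, and disparity used here are purely illustrative values.

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d.

    disparity_px    -- horizontal pixel shift of the same feature
                       between the left and right images
    focal_length_px -- camera focal length expressed in pixels
    baseline_m      -- distance between the two camera centers (m)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers only: a 1200 px focal length, 0.10 m baseline,
# and a 24 px disparity place the feature about 5 m from the cameras.
print(stereo_depth(disparity_px=24, focal_length_px=1200, baseline_m=0.10))
```

In practice this requires calibrated, rectified cameras and a way of matching the same feature in both images, but the underlying geometry is the same.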
Research
In 2012, Ph.D. candidate Joshua Schultz, under the guidance of assistant professor Jun Ueda of the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology, used piezoelectric materials to replicate the muscle movement of the human eye in controlling camera systems, with the aim of improving robotic operation.
This muscle-like action should make robotic tools safer for applications such as robotic rehabilitation and MRI-guided surgery.
At the heart of the novel control system is a piezoelectric cellular actuator that uses biologically inspired technology to allow a robotic eye to move like a human eye.
This high-speed, lightweight approach includes a single degree-of-freedom camera positioner that is used to demonstrate and characterize the control and performance of this biologically inspired actuator technology.
This novel technology uses less energy than traditional camera-positioning mechanisms and allows greater flexibility.
An interesting example of using camera technology for the development of robots comes from Professor Heinz Ulbrich at the Institute of Applied Mechanics at the Technical University of Munich, who is working on a novel piece of technology that explores the idea of a camera being able to mimic the movement of a human eye:
Superfast Robotic Camera Mimics Human Eye
Current Applications
Some of the current applications of robots with image sensors are in the following areas:
- Automotive inspection
- Electronics, semiconductors, and components
- Flat-panel inspection
- Food and beverages
- Global security solutions
- Medical devices and patient diagnostics
- Packaging and printing
- Pharmaceuticals
- Surface quality inspection
- Traffic enforcement
Future Developments
Machine vision has substantially matured over the last 15 years, becoming an essential tool for manufacturing automation.
Robots equipped with image sensors can identify and inspect components, precisely measure dimensions, or guide machines or other robots during assembly and pick-and-place operations.
Vision guidance has been very effective in placing surface-mount components on PCBs, though it has not yet proven effective for detecting defective parts on an existing production line.
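As a hedged sketch of the dimension-measurement task mentioned above, the snippet below converts a measured pixel length into millimeters with a simple pixels-per-millimeter calibration factor and checks it against a tolerance. The calibration value, nominal dimension, and tolerance are placeholders, not figures from any real system.

```python
def check_dimension(length_px, px_per_mm, nominal_mm, tol_mm):
    """Convert a pixel measurement to millimeters with a calibration
    factor and decide whether the part is within tolerance."""
    measured_mm = length_px / px_per_mm
    return measured_mm, abs(measured_mm - nominal_mm) <= tol_mm

# Placeholder numbers: a 512 px edge at 20.0 px/mm measures 25.6 mm,
# checked against a 25.5 mm nominal with a +/-0.2 mm tolerance.
measured, in_spec = check_dimension(length_px=512, px_per_mm=20.0,
                                    nominal_mm=25.5, tol_mm=0.2)
print(f"{measured:.2f} mm, pass={in_spec}")  # 25.60 mm, pass=True
```

Real inspection systems add lens-distortion correction and sub-pixel edge detection, but the pass/fail decision ultimately reduces to a calibrated comparison like this one.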
The future will see many more advances in this field, and robots that use image sensors and vision systems to perform complex operations will become increasingly common.
Robots with image sensors will be worth using in manufacturing operations only if they are quick enough to keep up with high production rates, easy and intuitive to use, and intelligent enough to deal with part-to-part and other process variations.
A 2017 report noted that new robotic applications are under development and projected that, over the next 10 years, innovations will continue to improve robotic sensors, producing sensor-enabled robots that are smarter and more capable than their predecessors.
Vision systems in robotics are expected to reach a market of $5.7 billion by 2027, while force sensing is forecast to reach about $6.9 billion and the use of multiple sensors in domestic robots is projected to reach $3.6 billion.