Robots use sensors to get information about their surroundings. In general, a sensor measures a characteristic of the environment and produces a proportional electric signal. Many robot sensors mimic our own senses, but robots can also sense things we can’t, like magnetic fields or ultrasonic sound waves. Robotic light sensors come in many different forms (photoresistors, photodiodes, phototransistors), but they all work in roughly the same way: when light falls on them, they respond by creating or changing an electric signal.

A filter placed in front of a light sensor makes its response selective, so the robot "sees" only a certain color. Light sensors can also be used for simple navigation. For example, a robot can use them to follow a white line. Other robots navigate using infrared light (the same invisible light used in your TV remote control): the robot sends out a beam of infrared light, some of which bounces off an obstacle and returns to a light sensor on the robot.
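To make the line-following idea concrete, here is a minimal sketch of a two-sensor control loop in Python. It is only illustrative and not from the original article: the functions it is given (read_left, read_right, set_motor_speeds) are hypothetical stand-ins for whatever hardware interface a particular robot provides.

```python
# Illustrative two-sensor white-line follower. All hardware functions are
# hypothetical: read_left() and read_right() return brightness readings
# (0 = dark floor, 1023 = bright white line); set_motor_speeds(l, r) sets
# the left and right wheel speeds between 0.0 and 1.0.
THRESHOLD = 512  # readings above this count as "over the white line"

def follow_line_step(read_left, read_right, set_motor_speeds):
    on_line_left = read_left() > THRESHOLD
    on_line_right = read_right() > THRESHOLD
    if on_line_left and on_line_right:
        set_motor_speeds(1.0, 1.0)   # line under both sensors: drive straight
    elif on_line_left:
        set_motor_speeds(0.3, 1.0)   # line has drifted left: slow the left wheel to turn left
    elif on_line_right:
        set_motor_speeds(1.0, 0.3)   # line has drifted right: slow the right wheel to turn right
    else:
        set_motor_speeds(0.2, 0.2)   # line lost: slow down
```

Calling follow_line_step repeatedly in a loop keeps the robot centered on the line, steering back toward whichever side still sees the bright stripe.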
For more complex vision systems, light sensors are not enough. Robots that find and remove imperfect products from a conveyor belt need to be able to resolve complex, changing images quickly. In these situations, the image from a camera "eye" must be analyzed by a computer program. Robotic vision is one of the greatest challenges for engineers. It is difficult to program a robot to see what’s important and ignore what isn’t. A robot has trouble interpreting things like glare, lighting changes, and shadows. Also, for a robot to have depth perception, it needs stereoscopic vision like our own. Resolving two slightly different images to make one 3D image requires large amounts of computer memory.
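To give a sense of why stereo vision is so demanding, the sketch below (Python with NumPy; not from the original article) implements a naive block-matching search: for each small patch in the left camera image it scans along the same row of the right image for the closest match, and the horizontal shift of the best match (the disparity) is larger for nearby objects. Even this toy version compares every patch against dozens of candidates, which hints at the processing and memory cost described above.

```python
# Toy stereo block matching. Inputs are assumed to be two rectified
# grayscale images of the same shape (2-D NumPy arrays); the output is a
# disparity map, where larger values correspond to closer objects.
import numpy as np

def disparity_map(left, right, patch=5, max_disp=32):
    half = patch // 2
    rows, cols = left.shape
    disp = np.zeros((rows, cols), dtype=np.float32)
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            ref = left[r - half:r + half + 1, c - half:c + half + 1]
            best_d, best_cost = 0, np.inf
            # Slide the patch leftward along the same row of the right image
            # and keep the shift with the smallest sum of absolute differences.
            for d in range(min(max_disp, c - half) + 1):
                cand = right[r - half:r + half + 1, c - d - half:c - d + half + 1]
                cost = np.abs(ref.astype(np.float64) - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[r, c] = best_d
    return disp
```

Real stereo systems use far more sophisticated matching and camera calibration, but the core idea of comparing the two slightly different views is the same.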
©2000 The Tech Museum of Innovation