

A Brief Look at Vision

What do your mother, hash browns, and blobs have to do with machine vision? Plenty.

Picture a bin full of light bulbs. The goal is to use a six-axis robot to pick up each bulb and deposit it elsewhere. “It’s a fragile part, but not that difficult to pick,” says Karl Gunnarsson, vision manager at SICK Inc. (Minneapolis, MN; www.sickusa.com), a provider of products including ultrasonic sensors; inductive, capacitive, and magnetic proximity sensors; magnetic cylinder sensors; photoelectric switches; contrast and luminescence sensors; color sensors; fork sensors; light grids; distance measurement sensors; data transfer systems; vision sensors; position finders; encoders; and positioning drives. “If you are close to the sphere, you could use a little suction cup to pick it up. The challenge would be that when you come in with the robot you don’t want to hit another light bulb.” Now, if you have 1,000 light bulbs in that bin and the robot breaks one, that in itself may not be a big deal. “But imagine all of the little glass pieces,” Gunnarsson says. That could be a big deal.

With a 3D vision system, detecting the light bulb is a manageable task. But Gunnarsson points out that some tasks are better suited to automation than others; that is, it might be better to pick something less breakable than light bulbs at random out of a bin. A more fundamental question might be: Why use vision at all?

Solving Problems.
Gunnarsson says, “You should use vision if you have a problem—pain—and you can define the application.” Which sounds simple enough. But he means really define it: “Explain it to your mother.” Otherwise, using vision (or any other technology, for that matter) may not solve anything and will simply create more pain. Once the problem is understood, determining the best way to solve it becomes somewhat simpler. For example, he says, instead of deploying a vision system, a sensor might do the job just as well, and it would probably be simpler to set up. “If you have repeatable part presentation and requirements for inspection that aren’t all that much—as in the part is there or not there—and it is the same part over and over again, it is probably better to do it with a sensor than with vision. You just take a sensor, a screwdriver, attach the sensor to a bracket, tweak it . . .”

But there are tasks that go beyond simple sensors, especially when there is a degree of randomness involved in the process: things aren’t repeatably lined up and you need to detect features on the objects. Often, people use 2D vision to complete these tasks. Here the routine is along the lines of getting a camera, lens, and lights, putting the camera in position, then lighting the area of interest so that there is good contrast between the part and the background. Then software tools are used to find “blobs” (yes, that really is the technical term for a connected region of pixels that stands out from its surroundings) and edges. Fairly straightforward. Gunnarsson says, however, that there are many people trying to do 3D jobs with 2D systems by, for example, doing all manner of things with the lighting in order to achieve the kind of depth perception that humans take for granted but that a single camera doesn’t readily provide.
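To make that routine concrete, here is a minimal sketch of the blob-and-edge step using the open-source OpenCV library in Python. It is illustrative only, not a depiction of SICK’s software; the file name, minimum blob area, and edge thresholds are assumptions that would be tuned for a real part and lighting setup.

    # A minimal 2D blob-and-edge sketch using OpenCV (illustrative only;
    # "part.png", the 500-pixel minimum area, and the Canny thresholds
    # are assumptions, not values from the article or from SICK).
    import cv2

    image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

    # Good lighting should make the part/background split easy to threshold.
    _, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # "Blobs" are simply connected regions in the binary mask.
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    for label in range(1, num_labels):      # label 0 is the background
        area = stats[label, cv2.CC_STAT_AREA]
        if area < 500:                      # ignore specks of noise
            continue
        cx, cy = centroids[label]
        print(f"blob {label}: area = {area} px, centroid = ({cx:.1f}, {cy:.1f})")

    # Edges come straight from the grayscale image.
    edges = cv2.Canny(image, 50, 150)

The point of the lighting step shows up in the threshold call: without good contrast between part and background, there is nothing clean for the software to split.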

See There.
Which brings us back to the bins. Part of “seeing” is being able to get contrast. So say you have a bin of gears. It would be difficult for a 2D system to detect a single gear because there isn’t the kind of contrast that conveys depth or volume. Gunnarsson uses another example to explain this: Say you’re in the food industry and you’re making hash brown patties. If a conveyor belt is running and two patties come down precisely stacked, with 2D it isn’t evident that that is the case. Which is why you need 3D. The 3D system builds up a topographic map through various methods, ranging from moiré imaging to triangulation. While a 2D image is something that people can look at, the information from the 3D system, such as coordinates, is something a computer can use directly. Those coordinates can then be used by the robot controller to position the end effector higher in order to pick off the topmost hash brown patty, rather than driving down to where the patty would be if there weren’t a stack, which would, of course, have a deleterious effect on one of the hash browns.
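A toy example of what that coordinate information buys you: suppose the 3D system returns a height map, a 2D array of surface heights above the belt in millimeters. The array values, patty thickness, and function names below are invented for illustration; a real system would work from calibrated sensor data.

    # Toy sketch: using a 3D height map to set the pick height.
    # The 12 mm patty thickness, map size, and names are assumptions
    # made for illustration, not values from the article.
    import numpy as np

    PATTY_THICKNESS_MM = 12.0

    def pick_point(height_map: np.ndarray):
        """Return (row, col, z) of the highest surface in the scene."""
        row, col = np.unravel_index(np.argmax(height_map), height_map.shape)
        return row, col, float(height_map[row, col])

    # Belt at 0 mm everywhere, with two patties precisely stacked.
    scene = np.zeros((100, 100))
    scene[40:60, 40:60] = 2 * PATTY_THICKNESS_MM   # the stack reads ~24 mm

    row, col, z = pick_point(scene)
    print(f"pick at pixel ({row}, {col}), approach to z = {z:.1f} mm")
    # A 2D system would assume z = 12 mm and drive into the lower patty.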

Although 2D is well accepted in many applications, Gunnarsson says that work needs to be done convincing people of the value of 3D. It isn’t necessarily more difficult, he says, just different. But if it is the appropriate solution to the problem—remember, one that you can explain to your mother—then it is the right tool to use.

One more bit of practical advice.

Historically, robot/vision people in the auto industry have talked about the connecting-rod-in-a-bin problem: How do you see and pick just a single rod when the rods are randomly piled in a bin? So all manner of sophisticated systems have been developed to accomplish this. Gunnarsson has a more pragmatic solution: “The best bin picking system you could create would be to have a magnet of the appropriate size. It would pick up a single rod. Can you do it with that?” Or maybe you need a vision system to get the job done.