I find the claim “No Programming Required” bogus. Simply put, automated devices must be told what to do. Technically, that’s programming.
A more believable claim would be that “programming is minimized.” Further, whatever programming remains is “simple.” That means there are no arcane programming languages and no complicated menus—even if they are supposedly “intuitive.”
Eyebot, an intelligent vision system from SIGHTech Vision Systems, Inc. (San Jose, CA), claims that “no programming is involved.” Skepticism notwithstanding, it comes mighty close to that when learning to inspect products.
Eyebot can learn to distinguish “good” parts from “bad” parts. There is no personal computer (PC), frame-grabber, complicated interface, or software to hassle with. Instead, you train it by turning a rotary selector switch through five steps: erase, view, learn, test, and run. At each step, using Up/Down and Yes/No buttons, you set such parameters as the size and location of the view area, video threshold, the reference image of a good product, and the decision threshold for accepting only good products.
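The five-step rotary workflow described above can be pictured as a small state machine. Below is a minimal sketch in Python; the class, the similarity metric, and the default threshold are all invented for illustration (the article describes the switch positions, not any API):

```python
from enum import Enum, auto

class Mode(Enum):
    """The five positions of the hypothetical rotary selector switch."""
    ERASE = auto()
    VIEW = auto()
    LEARN = auto()
    TEST = auto()
    RUN = auto()

class RotaryTrainer:
    """Toy model of train-by-switch: no code, just a knob and a threshold."""
    def __init__(self, decision_threshold=0.9):
        self.mode = Mode.ERASE
        self.references = []              # learned "good" images
        self.decision_threshold = decision_threshold

    def turn_switch(self, mode):
        self.mode = mode
        if mode is Mode.ERASE:
            self.references.clear()       # wipe prior training

    def show(self, image):
        if self.mode is Mode.LEARN:
            self.references.append(image)  # cumulative learning
        elif self.mode in (Mode.TEST, Mode.RUN):
            return self.judge(image)

    def judge(self, image):
        # Accept only if similarity to some learned image clears the threshold
        if not self.references:
            return False
        best = max(self.similarity(image, r) for r in self.references)
        return best >= self.decision_threshold

    @staticmethod
    def similarity(a, b):
        # Placeholder metric: fraction of matching pixels
        same = sum(x == y for x, y in zip(a, b))
        return same / max(len(a), 1)
```

Turning the knob to “learn,” showing a few good parts, then turning it to “run” is the whole training loop, which is the point of the no-programming pitch.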
|Not quite seeing eye-to-eye, these two Eyebots (front and back views) show both the user and machine interfaces for installing this vision system. The front panel (top) shows the control knob and four buttons to operate Eyebot. Turn the rotary switch to “learn,” show Eyebot good product during actual inspection, and then turn the switch to “run.” That’s it for programming Eyebot. The back panel (bottom) shows how Eyebot talks to the world. It has a 25-pin external port with two relays that switch 3 amps at 60 volts, and it has a serial port (DB-9). The monitor and camera connect via standard BNC cables. (Source: SIGHTech Vision Systems, Inc.)|
Eyebot learns by observing good products as they pass its viewing area. Learning is a cumulative process; through its neural net and fuzzy logic algorithms, the more you teach Eyebot, the better it gets. That is—and as with so many other things in biological systems (read “life”)—repetition reinforces learning.
Moreover, if you train Eyebot under varying operational conditions (lighting, product positions and orientations, or amounts of background clutter), it will accommodate those conditions later, when it is running. When Eyebot sees something it did not learn initially, it can send an alert by energizing two optically isolated relay switches (3 amps at 60 volts). These outputs can go directly to an automated device, such as an ejector. Compare this to a conventional vision system, which typically sends an output to a PC, which in turn sends an output to a programmable controller.
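That direct decision-to-relay path, with no PC or programmable controller in the loop, can be sketched as a single function. This is a hedged illustration only; the function name, similarity callback, and relay callback are all assumptions, not SIGHTech’s interface:

```python
def inspect(image, references, similarity, threshold=0.9, relay=None):
    """Hypothetical direct-to-relay decision path: no PC in the loop.

    If the image resembles nothing the system learned, energize the
    reject relay (e.g., to fire an ejector) immediately.
    """
    familiar = any(similarity(image, r) >= threshold for r in references)
    if not familiar and relay is not None:
        relay()  # e.g., close the 3-amp / 60-volt contact
    return familiar
```

The contrast with the conventional chain (vision system to PC to programmable controller) is that here the reject signal is raised in the same step as the decision.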
Teaching Eyebot takes only as long as passing a couple of dozen “good” products in front of its camera. It can learn up to 13 million features per second and make up to 3,600 decisions per minute, which comes out to about 60 parts per second. Plus, it can learn over 100,000 colors and millions of shapes, thereby detecting color or shape defects. (When I was given a demo, Eyebot had no problem picking out a round yellow hard candy from a handful of hard candies of different shapes and sizes, some of which were covering the yellow candy it was trained to recognize.)
|The entire Eyebot vision system fits in a ruggedized box about the size of a desktop telephone, weighs 2.2 pounds, and costs $4,995. The system does not require a PC, so there are no hardware drivers or software tables to upgrade when changing a peripheral computer’s operating system or swapping out a programmable controller. (Source: SIGHTech Vision Systems, Inc.)|
Costing $4,995, the entire Eyebot vision system fits in a ruggedized processor box measuring 5.5 inches wide by 9 inches deep by 1.6 inches high, and weighing 2.2 pounds. Any video camera, even a camcorder, can be connected to Eyebot. The same is true for its monitor; any monitor will do—even a television set.
Who says a joystick can’t be a business expense?
|Supposedly, this isn’t programming; it’s “configuring.” In the background is an example of the spreadsheet you use to tell the Cognex In-Sight vision system what features to look for, what to count, what to do when an object is out of tolerance, and to input other configuration data. To make this job easier, the spreadsheet is transparent, making the image of the object in question visible during configuring. The screen shot in the foreground is what you’d see when the vision system is running. (Source: Cognex Corporation)|
Another vision system company making similar claims—“no programming,” “no PC,” and under $5,000—is Cognex Corp. (Natick, MA) for its In-Sight 2000 vision sensor. Cognex has replaced complex programming with a spreadsheet for configuring vision applications—and a hand-held control pad for navigating around the spreadsheet. This control pad is not quite the joystick you—okay, your kids—would use for 3D games, but it does make “spreadsheeting” a heckuva lot more fun. And a lot easier.
This not-programming approach relies on the familiarity of spreadsheets. It also layers a sophisticated point-and-click interface on top of what’s typically drudge work (programming) in the vision-system world. Last, it accepts the reality that machine vision is based on data, and spreadsheets are perfect for working with large data sets.
|This is not your kid’s joystick. It’s your kid’s daddy’s joystick—the control pad for navigating around the spreadsheet used to program the Cognex In-Sight vision system. Using this control pad, you select tools and parameters through a series of menus and dialog boxes. In-Sight will automatically enter numerical data that corresponds with these selections into the spreadsheet cells. Linking the cells together creates programs for In-Sight to follow when locating or measuring parts or other functions. (Source: Cognex Corporation)|
To not-program In-Sight, you use the control pad to first select functions, tools, and parameters from a series of menus and dialog boxes. For example, one spreadsheet function can operate on values over time, as well as execute depending on a predefined condition. In-Sight includes a library of Cognex image processing and analysis tools, such as PatFind, which can locate parts despite changes in part orientation, size, or appearance. PatFind makes training the vision system simple: just place the subject part under the In-Sight camera and acquire the reference image.
The next step in not-programming, again using the control pad, is to connect spreadsheet cells together to perform tasks, such as locating objects or measuring parts. To make all of this not-programming easier, the In-Sight spreadsheet is transparent; that is, the part image, as seen by the vision system, is visible during the “no-programming” programming you’re doing in the spreadsheet.
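The cell-linking idea can be illustrated with a toy dependency-evaluated spreadsheet. This is a minimal sketch, not a model of In-Sight itself: the cell names, the distance formula, and the tolerance are entirely hypothetical, standing in for results a tool like PatFind would place in cells:

```python
class Sheet:
    """Toy spreadsheet: each cell holds a constant or a function of other cells."""
    def __init__(self):
        self.cells = {}

    def set(self, name, value, deps=()):
        self.cells[name] = (value, deps)

    def get(self, name):
        value, deps = self.cells[name]
        if callable(value):
            # Evaluate upstream cells first, then apply this cell's formula
            return value(*(self.get(d) for d in deps))
        return value

sheet = Sheet()
sheet.set("A1", (120.0, 80.0))   # located part center (a "find" tool result)
sheet.set("A2", (118.5, 80.2))   # measured feature position
sheet.set("B1", lambda p, q: ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5,
          deps=("A1", "A2"))     # distance between the two points
sheet.set("C1", lambda d: "PASS" if d <= 2.0 else "FAIL",
          deps=("B1",))          # tolerance check on that distance
```

Reading cell C1 pulls the measurement through the chain of linked cells, which is the spreadsheet-as-program idea: the “program” is the wiring between cells, not written code.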
|By buying vision-systems-in-a-box—this Cognex In-Sight system is not much larger than your shoe and it doesn’t require a PC or ancillary frame grabber—machine vision implementations become much simpler and less expensive. (Source: Cognex Corporation)|
Besides how you program these devices, the other trend in vision systems is to make them as self-contained as possible. In-Sight fits in a ruggedized processor box about the same size as the enclosure for SIGHTech’s Eyebot (In-Sight is about two inches longer), namely about the size of a desktop telephone. Both processor boxes weigh about the same. In-Sight has two built-in inputs and two outputs (10 to 24 VDC).
As with SIGHTech, Cognex apparently has also gotten flak from users who have simply gotten fed up with the vagaries of Microsoft Windows. Because neither Eyebot nor In-Sight requires a PC, there are no hardware driver changes to make when the operating system of an associated PC is upgraded. There are also no software incompatibilities to contend with. Nor do tables have to be updated when a programmable controller gets swapped out.
However, unlike Eyebot, In-Sight does require a special digital camera. The one that’s included in the In-Sight kit can acquire 640 x 480 images at 30 frames per second; its charge-coupled device can progressively scan, thereby capturing freeze-frame images of moving parts without requiring expensive strobe lighting.