
Motion Sees the Benefits of Vision

Vision-guided motion and motion-guided vision increasingly find applications in science and industry.

By: Kristin Lewotsky, Contributing Editor
(posted 07/21/2014)

Some wag once said that the human eye is the equivalent of a bi-convex lens backed by a Cray supercomputer. The task of capturing and processing image data to recognize shapes and determine a course of action may seem easy enough to you, but it can be enormously difficult for a machine. For a long time, techniques like vision-guided motion were the province of research. Today, a number of commercial vendors and integrators produce systems that deliver just these capabilities. The approach can introduce significant benefits, but only if properly executed. For the end user, it's a matter of understanding the possibilities, the challenges, and the limitations.

Figure 1: Vision-enabled motion system simplifies 3-D modeling of automotive castings. (Courtesy of Interactive Design)

Vision and motion technology can be roughly classed into two categories: motion-guided vision, in which one or more motion axes position image sensors used for identification and inspection, and vision-guided motion, in which camera input guides actuators to the correct position. Of the two, motion-guided vision is far more common. A system might use a single axis of motion, such as a servo motor or a stepper actuator, to move a sensor over a part (or a part under a sensor) to build up a 3-D image, slice by slice. This type of application can work particularly well to monitor non-flat parts like automotive castings (see figure 1) that may not be stable when a conveyor stops suddenly for an inspection step. It can also be used to reposition inspection cameras for different sizes and types of parts. "When the next part comes through it may be a different size or the features that we need to inspect are bigger or smaller, so we need to move our cameras accordingly," says Nathan Maholland, Sales Manager at Interactive Design Inc. (Lenexa, Kansas).
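To make the slice-by-slice idea concrete, here is a minimal sketch of a single-axis scan in Python. The stage and sensor objects and their methods are hypothetical placeholders for illustration, not a real vendor API.

```python
import numpy as np

def scan_part(stage, sensor, travel_mm=200.0, step_mm=0.5):
    """Step one motion axis through the scan range, grabbing a height
    profile at each position and stacking the slices into a 3-D map."""
    slices = []
    for pos in np.arange(0.0, travel_mm + step_mm, step_mm):
        stage.move_to(pos)           # command the axis to the next slice
        stage.wait_until_settled()   # let the axis settle before capture
        slices.append(sensor.read_profile())  # 1-D profile across the part
    return np.vstack(slices)         # rows = scan positions -> height map
```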

Motion axes can also be used to position a part for more conventional inspection. Interactive Design uses a threaded pin to tap holes in the bottom of small plastic devices used to make models for students. Once the part is mounted on the pin, the system uses the pin to turn the part to assess quality. "The cameras identify where the part is, how it’s being held, and then they rotate the part to four different positions for inspection," says Maholland. After the parts complete the inspection process, they're laser etched with the part number and sorted into one of 16 bins.
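The rotary inspection sequence Maholland describes reduces to a simple loop. This is a hedged sketch of that workflow; spindle, camera, and the inspect routine are assumed names, not Interactive Design's actual software.

```python
def inspect_part(spindle, camera, inspect, n_views=4):
    """Rotate the pin-mounted part to evenly spaced views and grade each."""
    results = []
    for i in range(n_views):
        spindle.rotate_to(i * 360.0 / n_views)  # quarter-turn between views
        results.append(inspect(camera.grab()))  # pass/fail at this view
    return all(results)  # part passes only if every view passes
```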

Vision-guided motion
At DW Fritz Automation (Wilsonville, Oregon), as much as 90% of the machines integrate vision and motion. In particular, the company supplies a number of vision-guided applications for the semiconductor industry, such as wafer placement and robotic pick and place. A camera captures images that are processed by the computer, which sends commands to the motion control system; the loop is closed locally on the motion controller to arrive at the specified location.
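A minimal sketch of such a vision-guided move appears below: measure the feature's offset in the image, convert pixels to stage units, and command a corrective move until the residual error is within tolerance. The names (camera, stage, find_feature) and the scale and tolerance values are assumptions for illustration, not DWFritz's implementation.

```python
UM_PER_PIXEL = 5.0   # assumed image scale from calibration
TOL_UM = 2.0         # assumed acceptable residual placement error

def servo_to_feature(camera, stage, find_feature, max_iters=10):
    for _ in range(max_iters):
        dx_px, dy_px = find_feature(camera.grab())  # offset from target, px
        dx_um, dy_um = dx_px * UM_PER_PIXEL, dy_px * UM_PER_PIXEL
        if abs(dx_um) < TOL_UM and abs(dy_um) < TOL_UM:
            return True                       # within tolerance: done
        stage.move_relative(-dx_um, -dy_um)   # correct toward the target
    return False                              # failed to converge
```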

Figure 2: Vision-guided motion allows motor system to insert four electronic wires per second. (Courtesy of DWFritz Automation)

Building a successful integrated system requires careful design up front. How precisely do you need to assemble or place the part? That answer drives the choice of camera resolution, optical performance, and the mechanical characteristics of the motion system. Speed likewise comes into play, from camera frame rate to the speed of the motion itself.
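A quick back-of-envelope check shows how placement tolerance drives camera choice. The field of view, pixel count, tolerance, and 10x rule of thumb here are illustrative assumptions, not figures from the article.

```python
fov_mm = 10.0        # assumed field of view across the sensor
pixels = 2048        # assumed sensor pixels across that field
um_per_pixel = fov_mm * 1000.0 / pixels   # ~4.9 µm/pixel
# A common rule of thumb is to resolve the tolerance by roughly 10x:
tolerance_um = 50.0
print(um_per_pixel <= tolerance_um / 10)  # True -> camera is adequate
```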

For decades, the dominant paradigm in machine building was for the mechanical engineers to design a machine, toss it over the wall to the electrical engineers to add motors and drives, then pass it in turn to the controls engineers, who had to figure out how to make it all work. When it comes to combining motion and vision, vertically integrated design is the key. That holds even for a relatively minor component like the gripper elements added to the end of a robotic handling arm. Use the conventional discrete design method and you might still get a system that works, just not well. "The complete system works synergistically," says Hob Wubbena, Vice President, Universal Robotics Inc. (Nashville, Tennessee). "One cannot be developed without considering the others."

It's important to remember that for these kinds of specialized tasks, the throughput levels are quite a bit different than for conventional motion systems like fill-on-the-fly pharmaceutical packaging equipment. At DWFritz, for example, the team has built vision-guided motion systems that insert wires into circuit boards at the rate of four wires per second (see figure 2). Each wire is roughly 4.5 mm long and 180 µm wide, inserted into 254-µm-diameter holes through two circuit boards. Each part features 8,000 wires; make an error on wire number 7,999 and you have a tiny little paperweight. With luck, the wires can be stripped and the board reclaimed, but the error still slows down manufacturing and impacts yield.
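The article's own numbers imply how tight the error budget is. A 180 µm wire in a 254 µm hole leaves only 37 µm of radial clearance, so the combined vision and motion error must stay well under that (ignoring wire tilt and hole-position error).

```python
hole_um, wire_um = 254.0, 180.0          # figures from the article
radial_clearance_um = (hole_um - wire_um) / 2.0
print(radial_clearance_um)               # 37.0 µm of radial clearance
```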

On the upside, even a modestly performing camera can provide more than enough resolution for most applications. "Depending on the application, you might want to go with a low-resolution camera, just to save money," says Corbin Voigt, technical director of software engineering at DWFritz. "Typically, our stages have 0.2- to 1.0-µm encoders on them, while our cameras are 3 to 9 µm per pixel, so you could have a stage with more resolution than our camera if you don't take into account subpixel interpolation."
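The comparison Voigt makes works out as follows; the 1/10-pixel interpolation accuracy assumed here is a typical figure for illustration, and the real value is application-dependent.

```python
camera_um_per_px = 3.0           # from the quote: finest camera scale
subpixel_factor = 10.0           # assumed 1/10-pixel edge-location accuracy
effective_um = camera_um_per_px / subpixel_factor   # 0.3 µm
encoder_um = 0.2                 # from the quote: finest stage encoder
print(effective_um, encoder_um)  # 0.3 µm vs 0.2 µm: now comparable
```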

In one example of motion-guided inspection, the motion system triggers the camera and the system can typically review parts in as little as two seconds each. Each machine can integrate up to 15 cameras, triggering five to 10 times as the part progresses through the machine. The team built its own strobe lights and strobe controllers for the application. To minimize subpixel blur, the typical strobe duration is 30 µs. Inspection takes place on the fly: operators load a cassette into the machine and the system takes care of the rest.
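The 30 µs strobe implies a motion-blur budget: blur equals stage velocity times exposure. Assuming a one-pixel blur allowance at 3 µm per pixel (an assumption, not a DWFritz specification), the stage can move at up to 0.1 m/s during capture.

```python
strobe_s = 30e-6                          # strobe duration from the article
blur_budget_um = 3.0                      # assumed: one 3 µm pixel of blur
v_max = blur_budget_um * 1e-6 / strobe_s  # m/s
print(v_max)                              # 0.1 m/s maximum scan speed
```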

The importance of lighting
Lighting plays a key role in the quality of the images and the performance of the system. Just a small change in form factor can have a startling effect. "Multiple surfaces, different shapes, different colors, that can really affect a system," says Ann Rogers, senior vision engineer at DWFritz. "Most customers don't understand that until they've lived through the process of development a couple of times." This sensitivity makes it doubly important to recheck lighting effects any time you swap out parts.

According to Voigt, customers increasingly request machines designed to be modified to run new parts with minimal change and expense. That requires forethought and a good assessment of trends: will future products be larger? Smaller? More complex, and thus in need of additional process steps? The more information the end user can walk in with, the better off everyone will be.

It's also important to plan for maintenance. If a motor or drive fails, it can be replaced with a new one, but the end user needs to ensure that the machine continues to perform as designed. "The calibration of the camera space to the motion space is probably the most difficult part to recover from," says Voigt. "Typically we'll provide datum features or fiducials in our machine and then rely on the same kind of fiducials on parts to help calibrate those two spaces together, all with the correct lighting of course." The DWFritz team supplies a calibration part matched to the size and characteristics of the customer part. If a motor has to be swapped out, the space needs to be recalibrated so that the transforms between the motion system, the vision system, and the real world are all correlated.
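One common way to rebuild that camera-to-stage calibration is to measure the same fiducials in pixel coordinates and stage coordinates, then fit a transform by least squares. This is a minimal sketch of that general technique, not DWFritz's procedure; it assumes an affine relationship and at least three matched fiducials.

```python
import numpy as np

def fit_affine(px_pts, stage_pts):
    """px_pts, stage_pts: (N, 2) arrays of matched fiducial locations."""
    A = np.hstack([px_pts, np.ones((len(px_pts), 1))])  # rows of [x, y, 1]
    # Solve A @ M = stage_pts for the 3x2 affine matrix M (needs N >= 3).
    M, *_ = np.linalg.lstsq(A, stage_pts, rcond=None)
    return M

def px_to_stage(M, px):
    """Map one pixel coordinate into stage coordinates."""
    return np.append(px, 1.0) @ M
```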

A software solution
At Universal Robotics, the team takes a different tack, using interactive-intelligence software that can learn to recognize objects it has never seen before in random locations, something conventional vision software cannot do on its own. Leveraging sensor input and high-speed parallel processing, sophisticated algorithms recognize randomly oriented parts, then guide actuators to pick them up and place them in a desired position. Universal partners with graphics specialist NVIDIA to enable processing faster than the robot's physical speed.

To recognize a part – a box, for example – a conventional vision system needs a set of characteristics that it can match up to the definition of a "box," such as shape, a label in a certain region, or dimensions that fall within certain parameters. The Universal software is different. It mimics the way human intelligence recognizes a "box." It identifies common patterns in more than 120 parameters, such as a roughly rectangular shape, open flaps on top, tape or no tape, and so on. The interactive intelligence algorithm works with this type of information to learn how to recognize the general shape of a box, regardless of the dimensions or variety of labeling. Applying a pattern-matching technique makes it both flexible and resilient; for instance, the team is currently installing a system capable of identifying 400,000 different parts.
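As a hedged illustration of the general idea (not Universal Robotics' actual algorithm), one could learn per-parameter statistics from labeled examples and then score new candidates by how well their feature vector fits the learned "box" pattern; the feature extraction and threshold here are assumptions.

```python
import numpy as np

class PatternModel:
    """Learns a pattern over many parameters from example feature vectors."""
    def fit(self, X):                # X: (N, P) features of known boxes
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-9
    def score(self, x):              # lower = closer to the learned pattern
        return float(np.mean(((x - self.mean) / self.std) ** 2))
    def is_match(self, x, threshold=4.0):
        return self.score(x) < threshold
```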

In application, the software can accommodate a high degree of natural variation in the material being handled, where traditional vision is prone to fail. For example, in one installation, palletized boxes are removed from a deep freeze for depalletizing onto a conveyor. The variation in ambient temperature, humidity, and time out of the freezer resulted in completely random surface icing, changing the appearance from carton to carton. The software was able to handle the variation. In the event the system fails to understand what it is seeing, a human operator can help it learn the new pattern variation in just a few minutes.

Long term, combining vision and motion delivers a powerful solution that brings big advantages. Still, it must be done right. The biggest misconception? Underestimating the complexity of the task. "I think that a lot of customers see sales brochures of a vision system and assume that it's easy," says Voigt. "They assume that all of their parts are going to be identical forever and we won't have to accommodate part variations." Do your homework. Understand your environment, and make sure your design is suitable for the technology. Finally, it can't be said often enough: if you're making the investment in the hardware and software, maximize your results by planning for the future.