Friday, September 25, 2009

Production Necessity

Jun Mitsudo describes advances in semiconductor inspection enabled by machine-vision algorithms, sensors, and processors


Jun Mitsudo holds a PhD in 3-D shape measurement from Ritsumeikan University (Kyoto, Japan) and is currently assistant manager of the Research and Development Center of Canon Machinery (Kusatsu, Japan). He has been involved with machine-vision technology since the late 1990s.


VSD: What is the mission of Canon Machinery in designing and building machine-vision systems for end users? Which industries do you serve?


Mitsudo: Canon Machinery consists of two business divisions: one that develops machines for factory automation and another that builds die-bonding machines for semiconductor test and assembly. Canon is the largest manufacturer of these machines in Japan and fourth worldwide.
Because we are committed to investing in research and development for semiconductor production technology, we regard machine-vision technology as a necessity. Indeed, because semiconductor production equipment is constantly increasing in complexity, the number of cameras required per machine grows each year. These cameras are used in a number of automated machine-vision processes, including high-accuracy alignment, part recognition, part identification, and optical character recognition (OCR).


VSD: What are end users requiring from Canon Machinery in the design of new systems?


Mitsudo: In factory automation systems, many different features are required that can only be produced at a reasonable cost by closely collaborating with end users. However, in the development of automated die-bonding equipment, the most important criterion is the throughput of the system. To achieve the highest possible throughput, many different technical factors such as speed, accuracy, and robustness need to be considered.


In addition, machine operators must be able to configure these systems as quickly as possible. This is especially important since semiconductor manufacturing is now being performed in developing countries, where an easy-to-use operator interface is critical to the manufacturer's success. In the future, these sophisticated operator interfaces will take advantage of different types of sensing technologies, including machine vision, to detect the status of a system and inform the operator accordingly.


VSD: What technologies and components do you use in these applications?


Mitsudo: Depending on the type of application, we choose the best-fitting components on a case-by-case basis to address the features and specifications requested for each machine. Because semiconductor devices differ in size, die-bonding machines must accommodate many different device types. Indeed, the smaller the device, the greater the required throughput of the system.


For this reason, CMOS cameras with programmable regions of interest (ROIs) are especially useful since these ROIs can be dynamically changed depending on the size of the individual IC. These types of cameras also eliminate the necessity to use relatively expensive zoom lenses.
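
To make the ROI idea concrete, here is a minimal Python sketch of re-programming a camera window around each die. The set_feature call is a hypothetical stand-in for a real SDK's node-map access, though Width, Height, OffsetX, and OffsetY are the standard GenICam feature names.

    # Sketch: shrink the readout window to the current die to raise frame rate.
    def set_roi_for_die(camera, die_w, die_h, cx, cy, margin=1.2,
                        sensor_w=2048, sensor_h=2048, step=4):
        """Center an ROI on the die at (cx, cy), padded by `margin`."""
        w = min(sensor_w, int(die_w * margin) // step * step)
        h = min(sensor_h, int(die_h * margin) // step * step)
        ox = max(0, min(sensor_w - w, cx - w // 2)) // step * step
        oy = max(0, min(sensor_h - h, cy - h // 2)) // step * step
        camera.set_feature("OffsetX", 0)   # many sensors require offsets to be
        camera.set_feature("OffsetY", 0)   # reset before the window is resized
        camera.set_feature("Width", w)
        camera.set_feature("Height", h)
        camera.set_feature("OffsetX", ox)
        camera.set_feature("OffsetY", oy)

    class FakeCamera:                      # stand-in so the sketch runs end to end
        def set_feature(self, name, value):
            print(name, "<-", value)

    set_roi_for_die(FakeCamera(), die_w=300, die_h=300, cx=1000, cy=900)

The smaller the die, the smaller the window and the higher the achievable frame rate, which is exactly the throughput coupling Mitsudo describes.
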
To perform image analysis, we use Halcon from MVTec Software (Munich, Germany; http://www.mvtec.com/) and create our own features based on the library. In the past, we developed our own image-processing hardware or bought off-the-shelf image-processing boards. However, in the late 1990s the processing power of the PC increased dramatically, and after an extensive evaluation we selected Halcon as our software package of choice.


VSD: What developments in embedded computing, GPUs, multicore CPUs, and multicore DSPs do you see? How will these technologies affect hardware development and how will system designers incorporate these developments?


Mitsudo: Of the different types of hardware currently available, perhaps graphics processing units (GPUs) are the most important. The high level of data parallelism used in these devices makes them an interesting alternative to general-purpose CPUs, especially in image-processing applications where very large images must be processed at high speeds.


For this to occur, however, system designers must have an intimate knowledge of computer architectures, algorithms, signal processing, optics, and mechanical design. In current die-bonding applications, newer algorithms are required to replace gray-value, edge-based template matching, and we expect such algorithms to be ported to GPU-based machines to increase their speed.
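
As a rough illustration of why gray-value matching ports so well to GPUs, the following sketch (using CuPy as an example library, not anything Canon Machinery has said it uses) computes a cross-correlation surface in the frequency domain; every stage is one large data-parallel array operation.

    import cupy as cp

    def xcorr_fft(image, template):
        """Circular cross-correlation via the FFT; each step is data-parallel."""
        h, w = image.shape
        f_img = cp.fft.rfft2(image)
        f_tpl = cp.fft.rfft2(template, s=(h, w))   # zero-pad template to image size
        return cp.fft.irfft2(f_img * cp.conj(f_tpl), s=(h, w))

    img = cp.random.rand(1024, 1024).astype(cp.float32)
    tpl = img[100:164, 200:264].copy()             # true position: y=100, x=200
    corr = xcorr_fft(img, tpl)
    print(cp.unravel_index(cp.argmax(corr), corr.shape))   # peak at (100, 200)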


The Canon Bestem-D02 is a multipurpose die bonder with a bonding speed of 0.29 s/cycle. The bonder incorporates CMOS image sensors with programmable ROI imaging. Image analysis is performed using Halcon from MVTec and a library customized by Canon Machinery.


VSD: What algorithms and specific software developments do you see emerging in the next five years?


Mitsudo: Different algorithms for 3-D pose calculation and 3-D shape reconstruction must become easier to integrate and maintain. Although these technologies are already practical, their use is limited by low acceptance among system designers. In the future, however, sophisticated software interfaces will make such software much easier to use.


VSD: What could vision component manufacturers do to make your job easier?


Mitsudo: In industrial machine-vision systems, the introduction of high-end machine-vision tools for template matching, caliper measurement, and blob analysis has made the development of die-bonding machines much easier. As these features migrate to smart vision sensors, they will become more practical and more widely used on the factory floor.


Other functions, such as the fast Fourier transform (FFT), feature-point extraction, calibration tools, neural networks, and support vector machines (SVMs), are also being incorporated into many off-the-shelf software packages. As system designers, we are committed to providing end users with the best solutions by combining these elemental technologies.


For this, we must test the feasibility of each function, which requires an enormous amount of time. A single software package that incorporates all of these functions is therefore the most valuable.
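
As a toy illustration of chaining such elemental functions, the sketch below feeds blob-analysis features to an SVM classifier, with OpenCV and scikit-learn standing in for the commercial packages discussed; the "good" and "bad" part images are fabricated for the example.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def blob_features(gray):
        """Elemental tools chained: Otsu threshold -> contours -> shape features."""
        _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        return [area, perim, 4 * np.pi * area / (perim ** 2 + 1e-9)]  # circularity

    # Fabricated "good" (round) and "bad" (square-ish) parts.
    good = np.zeros((64, 64), np.uint8); cv2.circle(good, (32, 32), 20, 255, -1)
    bad = np.zeros((64, 64), np.uint8); cv2.rectangle(bad, (12, 20), (52, 44), 255, -1)

    clf = SVC(kernel="rbf").fit([blob_features(good), blob_features(bad)], [1, 0])
    print(clf.predict([blob_features(good)]))  # -> [1] (accept)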


Because we use the ROI capability of CMOS cameras, we can dynamically change image-acquisition parameters to search for any specialized ROI within the image. Because this requires sending commands to the cameras continuously, standard digital interfaces such as Camera Link, FireWire, or GigE are useful in easing the setup of these cameras in semiconductor inspection applications.


VSD: In which industries do you see the most growth? In which geographic areas?


Mitsudo: Alternative energy sources have found increased popularity, especially after the price of oil increased to over $140 per barrel. We see this trend continuing with developers looking to produce automated systems for the inspection of solar wafers, solar cells, solar panels, and compact rechargeable batteries.


VSD: What kinds of new applications for machine vision do you expect to emerge? What new software, components, and subsystems will be needed?


Mitsudo: Although many newer image-processing algorithms offer high potential, they typically cannot meet the cost and speed requirements of die-bonding applications. However, given coming innovations in systems based on DSPs, GPUs, multiple CPUs, or FPGAs, such algorithms may soon become practical.


In the future, we hope to deploy systems that automatically detect the multiple processing resources available on a system and combine them efficiently for different processing tasks. These systems may perform functions such as point processing and neighborhood operations on an FPGA and perform other functions using a distributed computing system consisting of multiple GPUs or multicore CPUs. From a user's perspective, the use of this hardware must be transparent.

Wednesday, May 6, 2009

Toward a Machine-Vision Benchmark

Following the publication of Vision Systems Design's proposal in November 2008*, Wolfgang Eckstein shows how a machine-vision benchmark could be realized


To develop a successful benchmark for a machine-vision or image-processing system, it is necessary to understand the purpose of benchmarking. Although information about other components such as illumination, cameras, or frame grabbers may be required, it should not be the aim of a vision benchmark to evaluate this hardware.

Any successful machine-vision benchmark (MVB) should evaluate only the software and how it performs on various types of hardware. Results should be reported relative to the hardware used -- for example, whether the software ran on a standard CPU or a GPU. Having said this, an MVB should not be limited to software packages running on PCs but should also evaluate how image-processing solutions perform on open systems, embedded systems, and smart cameras.

The intention of any MVB should be to bring more transparency into the market for vision software and vision systems. It should enable vision-system users to determine more easily which software is most suitable for the requirements of a given application.

The aim of developing a benchmark should not be to compare single methods, such as the execution time of a Sobel filter, but to evaluate how well an application can be solved with the software. Additionally, a single benchmark should focus not only on the speed of such applications but also on their accuracy and robustness.

This kind of benchmark can be accomplished by supplying machine-vision and image-processing vendors with a set of one or more images stored as image files -- together with a description of the images and the benchmark task.

To develop this type of benchmark, a company or an organization could specify the rules and benchmarks, perform them, and publish the data and results. As a second option, experts within the vision community could propose such rules, which would then be edited by an unbiased third party or by an MVB consortium.

Based on these rules, single benchmarks could be offered by different manufacturers and added to an overall MVB. Everyone in the vision community could then download the MVB and perform the benchmarks. Or the benchmarks could be hosted by a neutral organization, such as the European Machine Vision Association (EMVA) or the Automated Imaging Association (AIA).
In practice, the second option is preferable since the MVB would not be controlled by a single company but would be open to every manufacturer. Furthermore, this approach would facilitate the development of an extensible MVB and, because the results would be visible to the whole community and to end users, every manufacturer would have a vested interest in ensuring that the MVB is up to date by using their latest software. This would ensure the MVB remains viable and always contains relevant information.

Rules for a benchmark
In the development of an MVB, certain rules first need to be established. These would include a description of the task to be solved and of how the benchmark data were generated.

Benchmarks would be chosen from classical fields of image processing, like blob analysis, measuring, template matching, or OCR. Such benchmarks require a general statement of the task to be accomplished—without restricting the selection of operators. Alternatively, a specific -- but widely needed -- feature of a tool should be analyzed, such as the robustness of a data code reader that is used to read perspectively distorted codes.

Finally, a benchmark must specify how the data used are generated -- whether they were generated synthetically (or modified) or whether the image used was captured from a camera. For general documentation purposes, it would be useful to specify further data such as the optics and camera used for acquiring the test images.

In addition to data, there must be a clear description of the task that must be solved. It is important that the solution is not limited and that any suitable software can be used.

Benchmark results must specify which information was used to solve the task. For example, it must be clear whether the approximate location of an object or the orientation of a barcode was used to restrict the search for a barcode within an image, because these restrictions influence speed and robustness.



To motivate many companies and organizations to perform the MVB, it is important that the results be transparent. To accomplish this, each manufacturer or organization must show the specific version of the software that was used, the hardware that the software was run on, and the benchmark's execution time.

Various methods of image processing also require the tuning of parameters used within a specific software package. Since these parameters might differ from the default values, they must also be specified. Optional information could also include the code fragment used to solve the benchmark task. This would allow users to learn more about the use of a given system and to perform the same test.

How to perform a benchmark
After developing the MVB, the benchmark data and its description should be made freely available. Based on these benchmarks, each manufacturer can develop optimal solutions, perform them, and provide the results. After checking whether the rules are fulfilled for each specific task, the results would then be tabulated and be made freely available to others to cross-validate the published data.

To begin the development of an MVB, these single benchmarks should be easy to understand, have clear semantics, cover typical machine-vision tasks, and allow an easy comparison of vision systems.

MVTec proposes a number of benchmarks (see below), each of which consists of a set of image sequences. Each sequence tests a specific behavior of a method. Within each sequence the influence of a "defect" is continuously increased. In template matching, an original image of a PCB could be generated and then successively defocused to provide a specific image sequence (see figure). The quality of specific software can then be measured by the number of images that can be processed correctly. The tests would check the speed, robustness, and accuracy of each application task.
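
A small sketch of how one such defocus sequence could be generated and scored follows; the file name and ground-truth position are hypothetical, and OpenCV's NCC matcher stands in for whichever package is under test.

    import cv2
    import numpy as np

    def defocus_sequence(image, steps=20, max_sigma=10.0):
        """Successively stronger Gaussian blur stands in for defocus."""
        return [cv2.GaussianBlur(image, (0, 0), s)
                for s in np.linspace(0.5, max_sigma, steps)]

    def score_matcher(sequence, template, true_xy, tol=2):
        """Count the frames in which the match stays within `tol` pixels."""
        hits = 0
        for frame in sequence:
            res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, loc = cv2.minMaxLoc(res)       # loc is (x, y) of best match
            if abs(loc[0] - true_xy[0]) <= tol and abs(loc[1] - true_xy[1]) <= tol:
                hits += 1
        return hits   # higher = more robust against defocus

    pcb = cv2.imread("pcb.png", cv2.IMREAD_GRAYSCALE)   # hypothetical test image
    tpl = pcb[120:184, 300:364].copy()
    print(score_matcher(defocus_sequence(pcb), tpl, true_xy=(300, 120)))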



For each test sequence, typically 20-40 images of VGA resolution would be required. Since one image typically has a size of 200 kbytes using, for example, the PNG format, this results in a total size of about 500 Mbytes for all the benchmarks listed.

MVTec would offer these test images together with the appropriate task descriptions if a neutral organization such as the EMVA or the AIA were willing to host them. Beyond this, MVTec invites other manufacturers and users to an open discussion to move the idea of an MVB forward and increase transparency in the machine-vision market.

Wolfgang Eckstein is managing director of MVTec Software, Munich, Germany; http://www.mvtec.com/.

* "Setting the Standard: Despite the myriad machine-vision software packages now available, there is yet no means to properly benchmark their performance," Vision Systems Design, November 2008, pp. 89-95.

Thursday, April 9, 2009

Signal Architecture

Shuvra Bhattacharyya explores how emerging hardware platforms enable more advanced software for image-processing applications


Shuvra Bhattacharyya is a professor in the Department of Electrical and Computer Engineering, University of Maryland at College Park, and holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies and an affiliate appointment in the Department of Computer Science. He received his BS from the University of Wisconsin at Madison and PhD from the University of California at Berkeley.


VSD: Could you provide us with some background information on your experience?


Bhattacharyya: My research interests include architectures and design tools for signal-processing systems, biomedical circuits and systems, embedded software, and hardware/software co-design. Before joining the University of Maryland, I was a researcher at Hitachi America Semiconductor Research Laboratory (San Jose, CA, USA) and a compiler developer at Kuck & Associates (Champaign, IL, USA). I'm presently the chair of the IEEE Signal Processing Society technical committee on design and implementation of signal-processing systems.


Books that I have co-authored or co-edited include Embedded Multiprocessors: Scheduling and Synchronization (second edition to be published by CRC Press in 2009); Embedded Computer Vision (Springer, 2008); and Memory Management for Synthesis of DSP Software (CRC Press, 2006).


VSD: Which aspects of image processing interest you? What current research are you or your students pursuing?


Bhattacharyya: My research group at the University of Maryland--known as the Maryland DSPCAD Research Group--is focused on design methodologies and CAD tools for efficiently implementing DSP systems.


The objective of our work in the area of image processing is to develop programming models that capture the high-level structure of image-processing systems. We are also looking at analysis techniques for deriving implementation properties such as memory requirements and processing throughput from these representations. And we are looking at synthesis techniques for deriving optimized implementations on different kinds of target architectures, including programmable DSPs, FPGAs, and embedded multiprocessors.


The programming models we work with are based on dataflow principles and specialized to the area of signal processing, including applications that process signals from image, wireless communication, audio, and video streams. By applying specialized programming models, our methods are able to efficiently expose and exploit high-level computational structure in signal-processing applications that is extremely time-consuming or impossible to derive from general-purpose program representations.


Some particular challenges in applying dataflow-based design methodologies to image-processing systems include incorporating multidimensional data into the formal stream representations used by the programming models and managing the large volumes of data and high performance requirements. In addition, increasing use of image processing in portable, energy-constrained systems makes it important to incorporate methods for aggressively optimizing power consumption while maintaining adequate image-processing performance and accuracy.


Two image-processing domains that I have been specifically involved in developing new design methods and tools for are distributed networks of smart cameras and medical image registration. The first is through an NSF-sponsored collaboration with Rama Chellappa (University of Maryland) and Wayne Wolf (Georgia Institute of Technology); and the second is through a collaboration with Raj Shekhar and William Plishker, who are jointly affiliated with the schools of Engineering and Medicine at the University of Maryland.


VSD: How do you think this research will impact future generations of image-processing and machine-vision systems?


Bhattacharyya: I think that research on dataflow programming environments and tools will allow designers of these future systems greater flexibility in experimenting with different kinds of embedded processors and heterogeneous multiprocessor platforms. Most dataflow-based tools for signal processing operate at a high level of abstraction, where individual software components in conventional programming languages (e.g., C or Verilog/VHDL) are selected based on the back-end tools associated with the targeted platform.


These platform language components are interfaced through dataflow-style restrictions and conventions that allow for the inter-component behavior to be analyzed and optimized using formal dataflow techniques. The output of these tools is an optimized, monolithic implementation in the selected platform language; or, for heterogeneous platforms, the output is a set of multiple, cooperating platform language implementations. This output can then be further processed by the toolchain (e.g., the C compiler or HDL synthesis tools) associated with the target platform.
This kind of design flow provides a number of advantages that are promising for next-generation image-processing and computer-vision systems. First, the emphasis on component-based design--where components adhere to thoroughly and precisely defined interfacing conventions--facilitates agile, high-productivity, modularity-oriented design practices.


Second, the use of dataflow as effectively a source-to-source framework in terms of the platform language provides for efficient re-targetability across different kinds of platforms, and allows designers to leverage the often highly developed, and highly specialized back-end tools of commercial embedded processing platforms. This provides a complementary relationship between the high-level design transformations, which are handled effectively by dataflow tools, and low-level (intra-component) optimizations and machine-level translation, which are best handled by platform tools.


A general challenge facing this kind of two-level design methodology is the overhead of inter-component data communications, which can sometimes dominate performance if it is not handled through a more integrated design flow. I expect that designers and tool developers will continue to make advances in this direction by using techniques for carefully controlling the granularity of components, using block processing within components, and exploring new ways to model and optimize the mapping of component interfaces into hardware and software.



Dataflow graph that represents an accelerator for evaluating polynomials. Each circle or oval represents a computational operation; the arrows that connect operations specify how data passes between operations. Annotations specify certain properties about the rates at which the incident operations produce and consume data. The operation labeled "controller" (broken out on right) has a hierarchical "nested" dataflow representation. (Adapted from Plishker, W., et al., Proc. International Symposium on Rapid System Prototyping, pp. 17-23, Monterey, CA, June 2008.)
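
For readers unfamiliar with the model, the toy interpreter below (my sketch, not the authors' framework) evaluates p(x) = 2x^2 + 3x + 5 with one actor per operation and FIFO queues on the edges, mirroring the structure of the graph in the figure; the fixed consume/produce rates per firing are what let dataflow tools derive schedules and buffer sizes statically.

    from collections import deque

    edges = {e: deque() for e in ("x1", "x2", "x3", "sq", "ax2", "bx", "s1", "y")}

    actors = [  # (inputs, output, function); one token consumed per input per firing
        (("x1", "x2"), "sq",  lambda a, b: a * b),   # x * x
        (("sq",),      "ax2", lambda a: 2 * a),      # 2 * x^2
        (("x3",),      "bx",  lambda a: 3 * a),      # 3 * x
        (("ax2", "bx"), "s1", lambda a, b: a + b),
        (("s1",),      "y",   lambda a: a + 5),
    ]

    def run(samples):
        for x in samples:                  # source actor: broadcast x onto 3 edges
            for e in ("x1", "x2", "x3"):
                edges[e].append(x)
        fired = True
        while fired:                       # fire any actor whose inputs are ready
            fired = False
            for ins, out, fn in actors:
                while all(edges[e] for e in ins):
                    edges[out].append(fn(*(edges[e].popleft() for e in ins)))
                    fired = True
        return list(edges["y"])

    print(run([0.0, 1.0, 2.0]))            # -> [5.0, 10.0, 19.0]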


VSD: What developments in FPGA design will affect hardware developments and how will system designers incorporate them?


Bhattacharyya: I think that support for heterogeneous multiprocessing in FPGAs--both in terms of rapid prototyping and developing high-performance implementations--will contribute significantly to the increased use and customization of such multiprocessor technologies in image-processing systems. Modern FPGA devices provide valuable platforms on which designers can experiment with different multiprocessor architectures, including different combinations of processing units and different kinds of networks for inter-processor communication. This opens up a valuable dimension of the design space that must be explored more deeply to achieve the most competitive implementations of next-generation applications.

Both "hard" and "soft" processor cores play useful roles in FPGA-based design methodologies and in applying these methodologies to develop embedded multiprocessor systems. Although soft cores incur significant penalties in terms of performance and resource utilization, they are relatively easy to configure in different ways to experiment with different numbers and kinds of processors, and to get an idea of how an application will map onto and scale with different system architectures.


This kind of rapid prototyping approach allows designers to develop much better intuition about system architecture alternatives before investing large amounts of specialized effort developing or applying a specific multiprocessor platform. On the other hand, hard processor cores, together with signal processing accelerators and other kinds of specialized IP blocks, provide valuable frameworks for accelerating image-processing applications in performance-oriented production systems.


VSD: Recent software developments in image processing include pattern recognition, tracking, and 3-D modeling. What algorithms and specific software developments do you see emerging in the next five years?


Bhattacharyya: I expect an accelerated use of heterogeneous platforms for image-processing software development, such as platforms involving combinations of GPUs and CPUs, or multiprocessors and FPGA-based accelerators. Heterogeneous platforms allow for more streamlined implementation, including exploitation of different forms and levels of parallelism in the application, and efficient integration of control and data processing.


The use of heterogeneous platforms, however, is conceptually more difficult, and the associated design flows are more complex. I expect increased attention to and application of frameworks that are aimed at application development on heterogeneous multiprocessor platforms. Some examples of emerging frameworks in this space are the open computing language (OpenCL), which is geared towards platforms that integrate GPU and CPU devices, and openDF, which is a dataflow-based toolset geared towards platform FPGAs and multicore systems.
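
To give a flavor of the OpenCL model mentioned above, here is a minimal PyOpenCL sketch (an illustration, not from the interview): the host stages buffers and one kernel instance runs per pixel on whatever device, GPU or CPU, the platform exposes.

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()          # picks an available GPU or CPU device
    queue = cl.CommandQueue(ctx)

    img = np.random.rand(512 * 512).astype(np.float32)   # flattened "image"
    mf = cl.mem_flags
    src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=img)
    dst = cl.Buffer(ctx, mf.WRITE_ONLY, img.nbytes)

    prg = cl.Program(ctx, """
    __kernel void threshold(__global const float *src, __global float *dst) {
        int i = get_global_id(0);           /* one work-item per pixel */
        dst[i] = src[i] > 0.5f ? 1.0f : 0.0f;
    }
    """).build()

    prg.threshold(queue, img.shape, None, src, dst)
    out = np.empty_like(img)
    cl.enqueue_copy(queue, out, dst)        # read the result back to the host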

Tuesday, December 16, 2008

Process Control



A discussion with Lou Fetch, Performance Automation
Lou Fetch is with Performance Automation (Loveland, OH, USA; www.performanceautomationinc.com), which was established in 2005. Editor in chief Conard Holton talked to him about inspection systems and implementing SPC practices.







VSD: Please describe your company and its services. What is the origin of your company?

Fetch: Performance Automation is a system integrator, specializing in machine-vision inspection solutions. Our roots came from the factory automation distribution business. As a result, we recognized that our customers had a critical need for machine-vision integration to improve their productivity and their competitiveness in a changing world economy.

VSD: What technologies and components do you use for your applications? How often do you evaluate competing technologies?

Fetch: Lately, the majority of our applications have been PC-based solutions with higher-resolution cameras and one axis of motion. We have used motion to accurately position the inspection item under the camera; other times we present the camera to the inspection item. We have begun moving away from FireWire and using more GigE cameras. National Instruments Vision Builder software has solved several of our machine-vision and control applications, but we have also used custom C# programming to solve specific applications. Using new technology for the first time presents a certain amount of risk to the overall success of the project, but customer applications have challenged us to explore new technologies and techniques, and new approaches can also translate into a more robust solution. Using established vendors with a solid reputation for support, quality products, and service limits that risk.

VSD: How do you approach a new application? Do you work with OEMs or other systems integrators?

Fetch: Some project opportunities are with clients using machine vision for the first time or who are relatively new to the technology. We take the time to walk them through the process of a successful application. It is essential for us to take a customer sample, acquire an image, use basic vision tools to demonstrate the feasibility of the application, and then assess and discuss the risk. Our projects also provide customer training, not only on the delivered solution but also on how to apply machine vision in general, so that customers have the basic knowledge to troubleshoot the installed project. In addition, the training allows the customer to better identify other practical applications for machine vision in their operation. In our fixed-cost project proposals, we often provide a 3-D model of the system layout with a detailed description of the deliverables. We refer to this approach as "Concept to Solution." 3-D models of recent projects have helped clients visualize what the solution will look like. A set of fabrication drawings can then be generated from these models (see Fig. 1).




FIGURE 1. A test station, first developed as a CAD drawing, is used to inspect molded automotive terminals. The customer uses machine vision for defect inspection that was previously done manually. An SPC application will be used to determine the capability of their process in more detail, since they currently do not have an efficient method for gathering dimensional data.

We are not currently using CAD with motion or to model lighting. However, at a technical conference last August we saw a presentation on the use of software to model machine-vision lighting and lenses, and we are curious to learn more. It is not uncommon for seemingly simple applications to take many hours of effort to work out the lighting and lens details. Software tools would certainly be a timesaver.

VSD: How do you design your systems for OEM product obsolescence?

Fetch: To protect our customers' interests, we use commercially available products from vendors that are industry leaders. Commercially available products also give the customer vendor options; for instance, several vendors can supply FireWire or GigE cameras. Major vendors also often have an upgrade path when a product does become obsolete.

VSD: In which areas do you see the most growth? What are users demanding from you in the design of new systems?

Fetch: One common element of recent projects has been higher-resolution cameras and statistical process control (SPC) reporting. We use standard SPC tools to show our clients what their process is capable of producing before we set pass/fail machine-vision criteria. This helps the client understand where to set machine-vision limits and ultimately assists them in improving their process. This also allows our clients to identify process improvements and then measure and quantify those improvements.



FIGURE 2. Standard SPC tools developed by Performance Automation show clients what their process is capable of producing before pass/fail machine-vision criteria are set. This helps understand where to set machine-vision limits, identify process improvements, and measure and quantify results.

Our typical SPC GUI has main tabs that display the current image, the last failed image, and statistics. The statistics tab has additional tabs that display information about each process variable. Each process variable is recorded with common process statistics such as average, standard deviation, maximum and minimum values, median, and the capability indices Cp and Cpk. A second tab for each process variable shows a histogram of the measured variable. After the major variables are identified and the customer has inspected a suitable sample size of parts, we turn the data into information they can use. This statistical process information can then be used to set the accept/reject limits for the vision tools. One customer has used this information to improve their molding process.
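
For reference, the capability indices mentioned reduce to a few lines of arithmetic; the sketch below (with invented limits and readings) computes the statistics such a tab reports.

    import numpy as np

    def spc_summary(x, lsl, usl):
        """Per-variable statistics: Cp ignores centering, Cpk penalizes it."""
        x = np.asarray(x, dtype=float)
        mu, sigma = x.mean(), x.std(ddof=1)
        return {"avg": mu, "std": sigma, "min": x.min(), "max": x.max(),
                "median": float(np.median(x)),
                "Cp": (usl - lsl) / (6 * sigma),
                "Cpk": min(usl - mu, mu - lsl) / (3 * sigma)}

    # Hypothetical terminal-width readings (mm) against 1.50 +/- 0.05 mm limits.
    readings = np.random.normal(1.51, 0.012, size=200)
    print(spc_summary(readings, lsl=1.45, usl=1.55))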

VSD: In which geographical areas do you work?

Fetch: We are a machine-vision integrator located in southern Ohio. Our primary marketing area is about a 500-mile radius. Although we have occasionally traveled outside that radius for our global clients, we have not yet executed a project outside the US.

VSD: What new markets and technologies do you see for your company in the future?

Fetch: Honestly, there is currently enough business in the traditional markets we already serve. There are still plenty of applications for inspecting molded and machined parts and labels, for instance. One interesting trend of late has been inquiries about machine vision for analytical applications. One professor asked us about using machine vision to monitor the activity of fish embryos in a Petri dish. Another client asked about using machine vision to quantify the amount of a compound that fluoresced under UV light. An emerging technology that intrigues us is time-of-flight cameras. Thermal imaging also offers opportunities in certain markets.

Thursday, October 16, 2008

A discussion with Ignazio Piacentini, ImagingLab



Ignazio Piacentini is director of ImagingLab (Lodi, Italy; www.imaginglab.it). He has a BSc in nuclear engineering (Milan, Italy) and an MSc degree in digital systems and instrumentation (Polytechnic of Central London, UK). He spent many years designing control and data acquisition systems for the thermonuclear fusion research community before joining the machine-vision industry. Editor in chief Conard Holton discussed trends in machine vision systems, software, and integration with him.




VSD: What sort of vision systems or services does ImagingLab provide? What is the origin of your company?
Piacentini: ImagingLab is a small high-tech company whose mission is to offer its know-how to system integrators, machine builders, and end users that need to adopt innovative technologies in the fields of machine vision and robotics, minimizing their investments and shortening their learning curves. Our core business is machine vision for industrial robotics, with what I consider a difference from most robotics companies: we have arrived at robotics from vision. Therefore, the design of our systems is vision-centric, with a very tight integration of the robotics and vision software, essentially leading to a single programming/configuration platform.

We are an atypical integrator, more akin to a design/engineering bureau. We work in close partnership with other companies (typically much larger) from the early design phase to a validation prototype. The actual construction of "machines" is left to our partners, but we follow the final engineering phase closely, along with the deployment of the machines to the factory floor and the introduction of improvements arising from the end users' everyday operation. We also provide the necessary technology transfer to facilitate the adoption of robotics and vision by both the machine manufacturers and the end user.

I got my first glimpse of machine vision in the early 1980s, while working at the JET Project (Culham Laboratories, Abingdon, UK), a large thermonuclear fusion experiment. The personal computer revolution had not yet occurred, and the vision algorithms were running slowly--an oxymoron?--on a PDP-11. In 1991 I left the research community and moved to machine vision. In 1995-96 I directed Graftek Italy and took part in the discussions and negotiations between Graftek France and National Instruments (Austin, TX, USA; www.ni.com) for the acquisition of a LabVIEW-based image-processing library, which led to the development of the machine-vision product line in the following years. After a spell with NI as a European business development manager for machine vision, I left in 2004 to start ImagingLab.

VSD: What technologies and components do you use in machine-vision-related applications? How often do you evaluate competing technologies?
Piacentini: Software plays the largest role in our system development and is obviously based on LabVIEW and its vision library. When the company started in 2004, the choice was to go fully digital with the adoption of IEEE 1394 (FireWire) high-resolution cameras, while Camera Link has been used for a few high-end applications. In terms of CPUs, LabVIEW and its Real-Time version allow us to develop conveniently under Windows [XP Pro, since we are not great fans of Vista] while deploying the application software on a number of different targets, from industrial panel PCs running XP Embedded to a number of RTOS boxes and smart cameras.

GigE has slowly trickled into our applications but has not yet significantly displaced FireWire. Communication with the robots and machine interfaces is based on a variety of standard protocols, ranging from TCP/IP over Ethernet to Modbus. EtherCAT will facilitate the use of remote deterministic I/O.
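
As a trivial illustration of such protocol glue (the JSON message format here is invented, not any robot vendor's actual protocol), pushing a vision-derived pick pose to a controller over TCP might look like this:

    import json
    import socket

    def send_pick_pose(host, port, x, y, theta):
        """Send one pick command and return the controller's one-line reply."""
        msg = json.dumps({"cmd": "PICK", "x": x, "y": y, "theta": theta}) + "\n"
        with socket.create_connection((host, port), timeout=2.0) as s:
            s.sendall(msg.encode())
            return s.makefile().readline().strip()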

3-D vision is becoming increasingly important, especially in conjunction with robotics, and we have recently developed a LabVIEW toolkit, under contract with SICK-IVP (Linköping, Sweden; www.sickivp.com), for its Ranger series of laser-scanning cameras.

Concerning competing technologies, we keep a vigilant eye on all the novelties appearing on the machine-vision market, often evaluating new products with hands-on trial, while we are rather cautious in their immediate deployment.


VSD: In which areas of the industry do you see the most growth? What are users demanding from you in the design of new systems?
Piacentini: We are heavily biased towards machine vision for robotics. Roughly 100,000 robots are sold around the world every year. This number is increasing steadily. The percentage of robots making use of vision is also increasing. European manufacturing companies are facing strong competition from China and India and can only compete by adopting more and more concepts of flexible manufacturing, which in turn leads to more advanced automation based on robotics and machine vision.

Our systems are targeted to the manufacturers of machines that perform some kind of automated production or assembly cycle. Culturally, these companies have a strong mechanical background, which has to move towards the more recent world of mechatronics. The success of our applications/systems is linked to the overall performance of the machines in terms of cycle time but also in the ease of use of the technologies we are offering. "Ease of use" encompasses a number of issues: simple end-user interface, tools to facilitate the commissioning and startup of the machine, self-calibration tools [there is a need to correlate the "looseness" of pixels deriving from optical distortion of various kinds to the more rigid space of robot coordinates], and remote diagnostics and maintenance.
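
A bare-bones version of such a self-calibration step, fitting an affine pixel-to-robot map by least squares from teach points, might look like the sketch below (the correspondences are fabricated; real tools also model lens distortion, which an affine map cannot absorb).

    import numpy as np

    def fit_pixel_to_robot(pix, rob):
        """Least-squares affine map from >= 3 pixel/robot correspondences,
        e.g. collected by jogging the robot to targets the camera has located."""
        pix, rob = np.asarray(pix, float), np.asarray(rob, float)
        A = np.hstack([pix, np.ones((len(pix), 1))])
        M, *_ = np.linalg.lstsq(A, rob, rcond=None)    # 3x2 affine matrix
        return M

    def pixel_to_robot(M, uv):
        return np.array([uv[0], uv[1], 1.0]) @ M

    # Fabricated teach points: image coords and the robot XY that touched them.
    pix = [(100, 100), (900, 120), (120, 700), (880, 720)]
    rob = [(10.0, 50.0), (90.2, 51.5), (11.5, 110.3), (89.8, 112.0)]
    M = fit_pixel_to_robot(pix, rob)
    print(pixel_to_robot(M, (500, 400)))    # robot XY for an image detection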

VSD: How will OEM components targeted towards machine-vision applications have to change to meet future needs?
Piacentini: I would like to see some attempt to standardize the data sheets of all the components that are part of the acquisition chain, from sensors and cameras to illumination devices and optics. Comparing the merits/quality of individual components is today rather difficult and can hardly be done based on the published data. Think about, for instance, how to compare the sensitivity and noise level of two cameras from different vendors or how to evaluate the uniformity and intensity of illumination of an array of LEDs at a given distance.

VSD: Could you discuss the machine-vision market in Italy and compare the machine-vision markets in different industry segments in Europe?
Piacentini: In the context of the European Union, Italy is fourth in gross domestic product but second, after Germany, in the production of "manufacturing machines." Seventy percent of these machines are exported. This obviously influences the machine-vision market, since machine vision is used both to automate the production cycle and as a quality-control instrument. Other market segments, such as food production and packaging, also represent a potentially large share of the machine-vision market.

A historical challenge for machine-vision companies active in the Italian market is the fragmentation of the market itself and the inherently small size of the machine-vision system integrators, which fall well below the European average of 38 people per company. ImagingLab has a team of only eight people, yet is considered a medium-sized integrator! More information on the peculiarities of the Italian machine-vision market can be found in a presentation I gave at the 2008 EMVA conference in Berlin.

VSD: What machine-vision algorithms and specific software developments do you see emerging in the next five years?
Piacentini: Algorithms for 2-D imaging have reached a reasonable level of maturity and completeness, and the fairly recent addition of geometric pattern matching to the more conventional approach based on normalized cross-correlation has vastly improved robotized pick-and-place operations. The machine-vision libraries available from various vendors offer more than just the pure algorithmic content--they also offer a very good level of user interface, simplifying the understanding and the interpretation of the results.
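
For reference, the normalized cross-correlation core that geometric methods complement fits in a few lines of NumPy; this is a didactic brute-force sketch, whereas production libraries accelerate the search with image pyramids and early termination.

    import numpy as np

    def ncc(patch, template):
        """Correlation score in [-1, 1] for one candidate position."""
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum())
        return float((p * t).sum() / denom) if denom else 0.0

    def match(image, template):
        """Exhaustive search over every placement of the template."""
        th, tw = template.shape
        H, W = image.shape
        scores = np.array([[ncc(image[y:y + th, x:x + tw], template)
                            for x in range(W - tw + 1)]
                           for y in range(H - th + 1)])
        return np.unravel_index(scores.argmax(), scores.shape)   # (y, x)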

With the advent of 3-D cameras capable of generating spatially calibrated 3-D images (either as a cloud of points or by correlating the z dimension to gray level), there is a lot to be done to bring a 3-D vision library to the same level. It is sufficient to think of the increase in complexity required for the reliable detection of a specific pattern once perspective and different 3-D positions in space are taken into account.

Processing speed will also benefit from the increased availability of multicore CPUs, though some rethinking will be required at the algorithmic level to be able to distribute data processing on these new hardware architectures.
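
That rethinking is often a data-distribution problem as much as an algorithmic one, as in this sketch of band-parallel filtering with Python's process pool; note the deliberately ignored seam problem flagged in the comment.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def box_filter(band):
        """3x3 box filter on one horizontal band (interior pixels only)."""
        out = band.copy()
        out[1:-1, 1:-1] = sum(band[1 + dy:band.shape[0] - 1 + dy,
                                   1 + dx:band.shape[1] - 1 + dx]
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        return out

    if __name__ == "__main__":
        img = np.random.rand(2048, 2048)
        bands = np.array_split(img, 8)      # one band per worker
        with ProcessPoolExecutor() as pool:
            result = np.vstack(list(pool.map(box_filter, bands)))
        # Caveat: pixels at band seams are wrong without overlapping "halo"
        # rows -- exactly the kind of detail that needs algorithmic rethinking.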


VSD: What kinds of new applications and industry trends do you expect to emerge in the future?
Piacentini: I can think of many, but solving and generalizing bin picking (as opposed to palletizing parts) is one of the current dreams of many companies involved with flexible manufacturing: an application that can most likely be solved with a careful combination of 3-D and 2-D imaging. I also see machine vision becoming an integral part of robotics rather than an external add-on, with the possibility of using vision-derived information in the kinematic control loop to improve speed and positional accuracy.

As the price/performance of machine-vision systems continues to improve, more and more systems will be deployed, with quality control becoming distributed during the whole process rather than being confined to the end of the production line.

Wednesday, July 30, 2008

Shape of things to come


A discussion with Rainer Obergrußberger, in-situ

Rainer Obergrußberger has an extensive background in image processing and analysis and is the managing director of in-situ (Sauerlach, Germany; www.in-situ.de). Editor in chief Conard Holton spoke to him about integrating machine-vision systems and developing innovative products.



VSD: What does your company do; what services does it provide?
OBERGRUßBERGER: in-situ is a growing company with more than 20 years of experience in the fields of image processing and machine vision. We specialize in industrial, medical, and scientific applications, offering image-processing systems in a broad range of products, as well as specialized development of hardware and software. Our business units are PC vision, smart-camera applications, and OEM products. In-situ means literally “without change of location.” In our business, in-situ means measurements using noncontact methods.

VSD: What is your personal background in the machine-vision industry?
OBERGRUßBERGER: After an apprenticeship in the electronics industry, I studied computer science at the University of Applied Sciences in Rosenheim, where part of the course focused on image analysis. During the last few years of my course I worked as a part-time student in the image-processing lab at the university. I also worked part-time for the company vidisys on software for a tool-presetting machine, capturing and measuring the geometrical shapes of metal-cutting tools with a camera setup.

I started at in-situ in 2001, first as an application engineer, later as CTO, and since the beginning of 2007 I have been the managing director. Three years ago I started a master’s degree course with a focus on computer graphics and image analysis. My master’s thesis was about the analysis of patterns in the biological field, and I was lucky enough to spend eight months at the University of Queensland, Australia, to complete this work.

VSD: What technologies and components does in-situ use? How do you evaluate competing technologies?
OBERGRUßBERGER: We use a wide range of hardware from different companies, depending on the application we want to develop. Analog cameras are now playing only a minor role in the applications of in-situ. Digital interfaces are more flexible regarding signal transfer and scalability.

CMOS cameras still lack sensitivity and image quality but are usually cheaper than CCD-based hardware and often sufficient. As more and more digital camera types support standard computer interfaces, such as FireWire or Gigabit Ethernet, frame grabbers are often no longer needed. To evaluate competing technologies, we must sometimes run lengthy tests with new hardware to discover its benefits and limitations. As this can be quite time-consuming and expensive, we always try to rely on proven hardware for as long as possible. We don’t get on every new technology train that comes along just because it’s trendy! Usually it takes some time for a new technology to compete with, or even replace, an old but well-proven one. In general, the competence of a solution provider such as my company is to decide which technology, including camera, illumination, and frame grabber, best fits the requirements of a customer’s application.

VSD: How do you approach a new application? Do you work with OEMs or other system integrators?
OBERGRUßBERGER: A new application is always a challenge, especially when there is little or no experience with the customer’s products or business. In my opinion, you should spend as much time as possible exploring what the customer’s application is all about and also thinking about what possibilities exist to provide the best solution. For example, in the beginning I don’t just speak to the boss of a company, but also to the worker who is confronted with a certain quality problem in a production line every day.

I find it best to have a look at how the company deals with certain quality problems, and at internal company processes. To fulfill a customer’s requirements, you have to completely understand how the company works. We always start an application by developing a proof of principle. If such a proof doesn’t take longer than one day, we do this for free. It’s the first step to show a potential customer the capabilities of the company. If possible, we try to get test samples covering the complete product spectrum, from good parts to faulty parts.

We’ve got various optical setups that we can use to test the provided parts. These include laser scanners, linescan and areascan camera setups with different resolutions and lenses, as well as smart cameras. We don’t develop hardware such as cameras or frame grabbers; we simply buy them from vision component suppliers such as Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) or Cognex (Natick, MA, USA; www.cognex.com). However, it’s a bit different with illumination. One of the most important things for providing a proper vision solution is having the most appropriate illumination concept. If standard illumination can’t satisfy the requirements, we develop special ones ourselves.

VSD: How do you design your systems for product obsolescence?
OBERGRUßBERGER: Unfortunately, normal obsolescence of a vision system always comes along with the life cycle of the product that the system was developed for. To be able to fulfill further unknown requirements, hardware and software should be flexible to different configurations and new tools.

At in-situ, we increasingly try to develop software in a reusable way. A proper camera interface should support many kinds of cameras—those available now and in the future. As we can’t handle the wide range of hardware alone, we use standard software tools such as Cognex VisionPro or Stemmer’s Common Vision Blox. Software should contain options for data storage, statistics, and an easy-to-use graphical user interface. Quite often it is a trade-off between maximizing flexibility on the one hand and software that is well adapted to the customer’s requirements on the other.

VSD: What algorithms and specific software developments do you see emerging in the next five years?
OBERGRUßBERGER: Many of the basic ideas of image analysis—including edge detection, template matching, and image filters—date from the 1970s. However, the algorithms have improved greatly in the last 20 years, and new approaches have been created. Higher computing performance allows better and more complex algorithms to run in real time and, therefore, to be used for machine-vision applications.

Vectorized pattern-matching has opened new dimensions in pick-and-place applications, as well as in product identification. Twenty years ago such algorithms would have been called “coffee algorithms,” as there would have been enough time to drink a good cup of coffee in the time it took to process a single image! Another good example is the shape-from-shading principle that we picked up three years ago. This also originated in the 1970s. We worked on optimizing hardware and software and are now able to extract 3-D surface data in real time. I think there will be new approaches for better and faster 3-D pattern-matching in the next five years. The “grip in a box” is still hard to solve and probably one of the biggest challenges in robot applications—in this case, a reliable and fast 3-D model finder would revolutionize the possibilities. In addition, tracking has become very important for camera surveillance and automatic guidance systems. Automatic and dynamic traffic control supported by cameras will be the reality in the near future.
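
To make the shape-from-shading idea concrete, here is a textbook photometric-stereo sketch, a close relative of, but not, in-situ's proprietary method: with three or more images under known light directions L, a Lambertian surface gives I = L . n per pixel, so normals and albedo follow from least squares.

    import numpy as np

    def normals_from_shading(images, lights):
        """images: (k, H, W) stack; lights: (k, 3) unit light directions."""
        k, H, W = images.shape
        I = images.reshape(k, -1)                        # k x (H*W) intensities
        G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # G = albedo * normal
        albedo = np.linalg.norm(G, axis=0)
        n = G / np.maximum(albedo, 1e-9)
        return n.reshape(3, H, W), albedo.reshape(H, W)

    # Tiny synthetic check: a flat surface facing up, seen under three lights.
    n_true = np.array([0.0, 0.0, 1.0])
    L = np.array([[0.5, 0.0, 0.866], [0.0, 0.5, 0.866], [-0.5, 0.0, 0.866]])
    imgs = np.stack([np.full((4, 4), L[i] @ n_true) for i in range(3)])
    n, rho = normals_from_shading(imgs, L)
    print(n[:, 0, 0])    # ~ [0, 0, 1]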

VSD: How will OEM components targeted toward machine-vision applications have to change to meet future needs?
OBERGRUßBERGER: I think that OEM components will have to be packed with intelligence, reliability, and flexibility, and that there will have to be standards. The vision sensor market has been increasing a lot in the last few years. Traceability opened a broad and lucrative market for ID reading and verification. OEM components for complex solutions will always have to be integrated by professional solution providers such as my company, but the vision-sensor market is targeted toward the machine builders with less experience in machine-vision issues. Therefore, vision sensors have to be easy to use, easy to understand, and very stable in different environmental conditions.

For interaction between different systems there will have to be standards for communication protocols, for hardware adaptation (for example, power supply, cables, and mounting), and for software interfaces. For in-situ, the GenICam standard has already been a large and important step toward a common and generally used camera description language. But it’s not just the components that have to change to meet future needs. The way in which vision solutions are provided and integrated will also have to become increasingly customer-oriented. Vision systems shouldn’t be just a black box in a production line. Customers require adequate training on a system to learn how it works and how they can fix certain problems themselves. Every system, however small and simple it might be, must be properly documented and described to the customer.

VSD: What new markets and technologies do you see for in-situ in the future?
OBERGRUßBERGER: In the next few years the company wants to establish a new business unit in 3-D surface inspection. After a major modification in hardware and software, we developed a new shape-from-shading technology that allows reconstruction of 3-D surface data from a wide range of materials. With DotScan—a vision system to inspect embossed Braille printing on pharmaceutical folding boxes—we have already shown that this principle works well in an industrial environment. Our patented inline version, called SPARC (surface pattern analyzer and roughness calculator), allows the inspection of surfaces in various fields of view and accuracies down to a few microns on moving parts. In November 2007, SPARC won the Vision Award prize for applied and innovative vision technology at the VISION trade fair in Stuttgart, Germany.

Monday, June 23, 2008

Camera Smarts




A discussion with Brad Munster, Visionary Technologies



Brad Munster has a background in electrical engineering and engineering sales and is the owner and president of Visionary Technologies (Holland, MI, USA; www.vis-tech.com). He has been working with smart cameras for more than 11 years. Editor in chief Conard Holton spoke to him about trends in machine-vision systems and service.




VSD: Please describe your company and its services.
Munster: Visionary Technologies is a machine-vision integrator that specializes in the development and deployment of smart vision camera systems. The company provides customers with turnkey inspection machines, retrofits cameras to existing manufacturing lines, and services existing camera systems. I have been in the automation field and with manufacturing companies for 15 years—the first three with a distributor in Australia, where we focused on products to serve the mining industry. I later worked as a sales engineer for a high-tech distributor for industrial automation in Michigan and then for a system integrator before starting Visionary Technologies about five years ago.

VSD: What technologies and components do you use for your applications?
Munster: Visionary Technologies uses smart cameras for 90% of its applications. This is something our customers are more familiar with and therefore not as hesitant to implement. The cameras we use come from manufacturers such as Cognex, DALSA IPD, Keyence, National Instruments, and PPT Vision. We continually evaluate the latest technologies to stay current and provide our customers with the best solution within their budgets. We try to choose the best system for the application and not focus on the name on the side of the camera. Each manufacturer has strengths and weaknesses in its products and algorithms. We also see customers favoring one camera brand over another, and we provide them with a realistic assessment of which system would be best for their application.

VSD: How do you approach a new application? Do you work with OEMs or other system integrators?
Munster: Machine vision is often an oversold technology, or customers think it is a magic pill. Many customers believe that the camera can inspect anything within the field of view—and sometimes out of it—and that it should work flawlessly out of the box. From the first meeting we endeavor to walk them through the integration steps and reset them to a more realistic set of expectations. Some customers think that a camera is like other pieces of automation equipment—once it is programmed, everything is done. We prepare them for a much longer debug time: instead of two to five days, we have them expect two to five weeks, or longer, depending on the variables. During this time we often find manufacturing issues that the customer did not believe were relevant or possibly did not know about. Some of our largest customers are other system integrators. In today’s market very few companies have the resources to dedicate a person solely to a specific task such as machine-vision programming. This is a problem when you are trying to implement machine vision, because you need to work with this technology every day or you will lose skills and techniques or have to relearn them. Some system integrators or machine builders may implement a camera system once every three to six months; we work on six to eight different projects per week. This keeps skills sharp and exposes us to many different types of applications, increasing our knowledge base and techniques.

VSD: Recent software developments in image processing include pattern recognition, tracking, and three-dimensional modeling. What algorithms and specific software developments do you see emerging in the next five years?
Munster: The company has started to develop its own set of tools and algorithms for any given platform. The software or hardware manufacturer often gives you a great starting place for basic inspections and typical applications, but customers can run into problems when they can’t go beyond the standard software tools (see photo). We have developed more advanced tools and techniques that let us ‘go under the hood’ by combining multiple tools to make the customer’s algorithms more robust and error-proof. Some of the new software developments will probably be based around better OCR and Data Matrix tools, which are becoming more prevalent as traceability and accountability requirements grow.

VSD: How do you design your systems for OEM product obsolescence?
Munster: Customers should consider the camera system an asset and keep in mind potential redeployment options for the future. Redeployment and retooling are definitely new buzzwords on the manufacturing floor. Many smart cameras can be upgraded to newer versions of software and firmware. Sometimes, for an incremental upfront cost, the customer can purchase a platform that is more versatile for future use rather than suited only to the immediate application. We have redeployed obsolete camera systems for new applications due to customer budget constraints. Obviously we need to determine in advance whether the hardware and software will meet the requirements and whether they can be integrated efficiently. Sometimes a current camera system will offer more advanced software tools that reduce integration time and therefore lower the overall cost of the project.

VSD: In which areas do you see the most growth? What are users demanding from you in the design of new systems?
Munster: Smart-camera systems still dominate most factory floors. These systems are becoming easier to use, and their hardware costs continue to come down. While PC-based systems are entering the mainstream as customers become more camera-savvy, they are still more difficult to deploy and maintain. The bottom line is that customers always demand the most cost-effective solution to their problems. I think they realize more today than they did five years ago that cost of ownership is much more important than up-front cost.

VSD: How will OEM components have to change to meet future needs?
Munster: I think future-proofing projects is something that will help the longevity of any product. Many of the components we use are very versatile. Vision applications change so much from design to implementation that we need components that can change with the application.

VSD: How do you think that the machine-vision market differs in different national or international regions?
Munster: Typically we stay with North America. From an integrator or machine-builder aspect we see regulations as about the only major difference. Several of the principles and technologies are common across the world. Differences from national and international regions have more to do with understanding regulations and certifications than key differences in technologies or their application. It is much like going from plant to plant—each customer has its own set of requirements and standard practices.

VSD: What new markets and technologies do you see for your company in the future?
Munster: The company is growing into more of a service-oriented company rather than a machine builder or system integrator. We have found that we can serve more markets and customers by providing them with “vision experts” as opposed to providing a single machine or camera installation. I see us growing to meet the demand for applications that use PC-based systems. We have geared up to service and supply this technology in the future. Our value to customers is being able to solve their machine-vision problems regardless of platform. If we can come in and work on any type of system they have, we will always add value.