VSD Business Views

Production Necessity (September 25, 2009)

<p><div class="blogmeta"><img src="http://images.pennnet.com/vsd/blogs/mitsudo-1-092509.jpg" align="left" />Jun Mitsudo describes advances in semiconductor inspection enabled by machine-vision algorithms, sensors, and processors</div><br><br><span class="blogpreview">Jun Mitsudo holds a PhD in 3-D shape measurement from Ritsumeikan University (Kyoto, Japan) and is currently assistant manager of the Research and Development Center of Canon Machinery (Kusatsu, Japan). He has been involved with machine-vision technology since the late 1990s.</span><p></p><p><br /><strong><em>VSD:</em></strong> What is the mission of Canon Machinery in designing and building machine-vision systems for end users? Which industries do you serve?</p><p><br /><strong>Mitsudo:</strong> Canon Machinery consists of two business divisions: one that develops machines for factory automation and another that builds die-bonding machines for semiconductor test and assembly. Canon is the largest manufacturer of these machines in Japan and fourth worldwide.<br />Because we are committed to investing in research and development for semiconductor production technology, we regard machine-vision technology as a necessity. Indeed, because semiconductor production equipment is always increasing in complexity, the number of cameras required per machine is becoming larger each year. 
These cameras are used in a number of automated machine-vision processes including high-accuracy alignment, part recognition, part identification, and optical character recognition (OCR) applications.</p><p><br /><strong><em>VSD:</em></strong> What are end users requiring from Canon Machinery in the design of new systems?</p><p><br /><strong>Mitsudo:</strong> In factory automation systems, many different features are required that can only be produced at a reasonable cost by closely collaborating with end users. However, in the development of automated die-bonding equipment, the most important criterion is the throughput of the system. To achieve the highest possible throughput, many different technical factors such as speed, accuracy, and robustness need to be considered.</p><p><br />In addition, machine operators must configure these systems as quickly as possible. This is especially important since semiconductor manufacturing is now being performed in developing countries, where an easy-to-use operator interface is critical to the manufacturer's success. In the future, these sophisticated operator interfaces will take advantage of different types of sensing technologies including machine vision to detect the status of a system and inform the operator accordingly.</p><p><br /><strong><em>VSD:</em></strong> What technologies and components do you use in these applications?</p><p><br /><strong>Mitsudo:</strong> Depending on the type of application, the best-fitting components that address the different requested features and specifications of each machine are individually chosen on a case-by-case basis. Because semiconductor devices differ in size, die-bonding machines are required to accommodate many different types. 
Indeed, the smaller the size of the device, the greater the required throughput of the system.</p><p><br />For this reason, CMOS cameras with programmable regions of interest (ROIs) are especially useful since these ROIs can be dynamically changed depending on the size of the individual IC. These types of cameras also eliminate the necessity to use relatively expensive zoom lenses.<br />To perform image analysis, we use Halcon from MVTec Software (Munich, Germany; <a href="http://www.mvtec.com/">http://www.mvtec.com/</a>) and create our own features based on the library. In the past, we developed our own image-processing hardware or bought off-the-shelf image-processing boards. However, in the late 1990s, the processing power of the PC increased dramatically and after an extensive evaluation we selected Halcon as our software package of choice.</p><p><br /><strong><em>VSD:</em></strong> What developments in embedded computing, GPUs, multicore CPUs, and multicore DSPs do you see? How will these technologies affect hardware development and how will system designers incorporate these developments?</p><p><br /><strong>Mitsudo:</strong> Of the different types of hardware currently available, perhaps graphics processing units (GPUs) are the most important. The high level of data parallelism used in these devices makes them an interesting alternative to general-purpose CPUs, especially in image-processing applications where very large images must be processed at high speeds.</p><p><br />For this to occur, however, system designers must have an intimate knowledge of computer architectures, algorithms, signal processing, optics, and mechanical design. 
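The dynamic-ROI idea Mitsudo describes can be illustrated with a short sketch: the readout window is sized to the die being placed, so smaller devices mean less pixel data per cycle and higher throughput. The function names and dimensions below are hypothetical illustrations, not Canon Machinery's actual interface.

```python
# Sketch of the dynamic-ROI idea: read out only a window sized to the current
# die instead of the full CMOS frame. Names and sizes are hypothetical.

def roi_for_die(die_w, die_h, center_x, center_y, margin=8):
    """Return an ROI (x, y, w, h) just large enough for the die plus a margin."""
    w, h = die_w + 2 * margin, die_h + 2 * margin
    return (center_x - w // 2, center_y - h // 2, w, h)

def read_roi(frame, roi):
    """Crop the ROI from a full frame (a list of pixel rows); a real CMOS
    camera would instead be programmed to transfer only this window."""
    x, y, w, h = roi
    return [row[x:x + w] for row in frame[y:y + h]]

frame = [[0] * 640 for _ in range(480)]   # stand-in for a VGA frame
roi = roi_for_die(die_w=40, die_h=40, center_x=320, center_y=240)
window = read_roi(frame, roi)
print(len(window), len(window[0]))        # 56 56
```

A smaller device yields a smaller window, so less data is transferred and processed per bond cycle, which is the throughput argument made above.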
In current die-bonding applications, newer algorithms are required to replace gray-value edge-based template matching, and we expect such algorithms to be ported to GPU-based machines to increase their speed.</p><p><br /><center><img src="http://images.pennnet.com/vsd/blogs/levphoto092509.jpg" align="center" /></center><center><em>Canon Bestem-D02 is a multipurpose die bonder with a bonding speed of 0.29 s/cycle. The bonder incorporates CMOS image sensors with programmable ROI imaging. Image analysis is performed using Halcon from MVTec and a library customized by Canon Machinery.</em></center><p></p><p><br /><strong><em>VSD:</em></strong> What algorithms and specific software developments do you see emerging in the next five years?</p><p><br /><strong>Mitsudo:</strong> Different algorithms for 3-D pose calculation and 3-D shape reconstruction must become easier to integrate and maintain. Although these technologies are already practical, their use is limited due to low acceptance by system designers. In the future, however, sophisticated software interfaces will make such software much easier to use.</p><p><br /><strong><em>VSD:</em></strong> What could vision component manufacturers do to make your job easier?</p><p><br /><strong>Mitsudo:</strong> In industrial machine-vision systems, the introduction of high-end machine-vision tools for template matching, caliper measurement, and blob analysis has made the development of die-bonding machines much easier. As these features migrate to smart vision sensors, they will become more practical and more widely used on the factory floor.</p><p><br />Other functions such as the fast Fourier transform (FFT), feature point extraction, calibration tools, neural networks, and support vector machines (SVMs) are also being incorporated into many off-the-shelf software packages. 
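As a toy illustration of one of these elemental tools, gray-value template matching by exhaustive search can be sketched as follows; this is for intuition only and is not how Halcon or any commercial package implements matching.

```python
# Toy gray-value template matching by exhaustive search: slide the template
# over the image and keep the position with the minimum sum of squared
# differences (SSD). Data values are made up for illustration.

def match_template(image, template):
    """Return (row, col) of the best match by minimum SSD."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 9, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(match_template(image, template))  # (1, 1)
```

In practice, plain SSD matching is sensitive to illumination changes and rotation, which is why normalized correlation and the edge- and shape-based methods mentioned above are preferred on the factory floor.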
As system designers, we are committed to providing end users with the best solutions by combining these elemental technologies.</p><p><br />For this, we must test the feasibility of each function, and this requires an enormous amount of time. Single software packages that incorporate all of these functions therefore prove most valuable.</p><p><br />Because we incorporate ROI processing of CMOS cameras, we can dynamically change image-acquisition parameters to search for any specialized ROI within the image. Because this requires sending commands to the cameras continuously, standard digital interfaces such as Camera Link, FireWire, or GigE are useful in easing the setup of these types of cameras in semiconductor inspection applications.</p><p><br /><strong><em>VSD:</em></strong> In which industries do you see the most growth? In which geographic areas?</p><p><br /><strong>Mitsudo:</strong> Alternative energy sources have found increased popularity, especially after the price of oil increased to over $140 per barrel. We see this trend continuing with developers looking to produce automated systems for the inspection of solar wafers, solar cells, solar panels, and compact rechargeable batteries.</p><p><br /><strong><em>VSD:</em></strong> What kinds of new applications for machine vision do you expect to emerge? What new software, components, and subsystems will be needed?</p><p><br /><strong>Mitsudo:</strong> Although many newer image-processing algorithms offer high potential, they typically cannot overcome the cost and speed requirements of die-bonding applications. However, looking at future innovations in systems based on DSPs, GPUs, multiple CPUs, or FPGAs, it is likely that such algorithms will soon become practical.</p><p><br />In the future, we hope to deploy systems that automatically detect the multiple processing resources available on a system and combine them efficiently for different processing tasks. 
These systems may perform functions such as point processing and neighborhood operations on an FPGA and perform other functions using a distributed computing system consisting of multiple GPUs or multicore CPUs. From a user's perspective, the use of this hardware must be transparent.<br /></p>

Toward a Machine-Vision Benchmark (May 6, 2009)

<div class="blogmeta"><img src="http://images.pennnet.com/vsd/blogs/bveckstein050609.jpg" align="left" />Following the publication of <i>Vision Systems Design</i>'s proposal in November 2008*, Wolfgang Eckstein shows how a machine-vision benchmark could be realized</div><br /><br />To develop a successful benchmark for a machine-vision or image-processing system, it is necessary to understand the purpose of benchmarking. Although information about other components such as illumination, cameras, or frame grabbers may be required, it should not be the aim of a vision benchmark to evaluate this hardware.<br /><br />Any successful machine-vision benchmark (MVB) should evaluate only the software and how it performs on various types of hardware. Results should be presented as they relate to whether a standard CPU or a GPU is used. Having said this, an MVB should not be limited to software packages running on PCs but should also evaluate how image-processing solutions perform on open systems, embedded systems, and smart cameras.<br /><br />The intention of any MVB should be to bring more transparency into the market for vision software and vision systems. 
It should enable vision-system users to determine more easily which software is most suitable for the requirements of a given application.<br /><br />The aim of developing a benchmark should not be to compare single methods such as the execution time of a Sobel filter but to evaluate how well an application can be solved with the software. Additionally, a single benchmark should focus not only on the speed of such applications but also their accuracy and robustness.<br /><br />This kind of benchmark can be accomplished by supplying machine-vision and image-processing vendors with a set of one or more images stored as image files -- together with a description of the images and the benchmark task.<br /><br />To develop this type of benchmark, a company or an organization could specify the rules and benchmarks, perform them, and publish the data and results. As a second option, experts within the vision community could propose such rules, which would then be edited by an unbiased third party or by an MVB consortium.<br /><br />Based on these rules, single benchmarks could be offered by different manufacturers and added to an overall MVB. Everyone in the vision community could then download the MVB and perform the benchmarks. Or the benchmarks could be hosted by a neutral organization, such as the European Machine Vision Association (EMVA) or the Automated Imaging Association (AIA).<br />In practice, the second option is preferable since the MVB would not be controlled by a single company but would be open to every manufacturer. Furthermore, this approach would facilitate the development of an extensible MVB and, because the results would be visible to the whole community and to end users, every manufacturer would have a vested interest in ensuring that the MVB is up to date by using their latest software. 
This would ensure the MVB remains viable and always contains relevant information.<br /><br /><strong>Rules for a benchmark</strong><br />In the development of an MVB, certain rules first need to be established. This could include a description of a task to be solved and how the benchmark data was generated.<br /><br />Benchmarks would be chosen from classical fields of image processing, like blob analysis, measuring, template matching, or OCR. Such benchmarks require a general statement of the task to be accomplished—without restricting the selection of operators. Alternatively, a specific -- but widely needed -- feature of a tool should be analyzed, such as the robustness of a data code reader that is used to read perspectively distorted codes.<br /><br />Finally, a benchmark must specify how the data used are generated -- whether they were generated synthetically (or modified) or whether the image used was captured from a camera. For general documentation purposes, it would be useful to specify further data such as the optics and camera used for acquiring the test images.<br /><br />In addition to data, there must be a clear description of the task that must be solved. It is important that the solution is not limited and that any suitable software can be used.<br /><br />Benchmark results must specify which information was used to solve the task. For example, it must be clear whether the approximate location of an object or the orientation of a barcode was used to restrict the search for a barcode within an image, because these restrictions influence speed and robustness.<br /><br /><center><img src="http://images.pennnet.com/vsd/blogs/bvfig050609.jpg" /></center><center><em>MVTec proposes a number of benchmarks, each of which consists of a set of image sequences. Each sequence tests a specific behavior of a method. Within each sequence the influence of a "defect" is continuously increased. 
For example, in template matching, a sequence of images of a PCB could be generated by successively changing the focus distance to check for robustness against defocus.</em></center><br /><br />To motivate many companies and organizations to perform the MVB, it is important that the results be transparent. To accomplish this, each manufacturer or organization must show the specific version of the software that was used, the hardware that the software was run on, and the benchmark's execution time.<br /><br />Various methods of image processing also require the tuning of parameters used within a specific software package. Since these parameters might differ from the default values, they must also be specified. Optional information could also include the code fragment used to solve the benchmark task. This would allow users to learn more about the use of a given system and to perform the same test.<br /><br /><strong>How to perform a benchmark</strong><br />After developing the MVB, the benchmark data and its description should be made freely available. Based on these benchmarks, each manufacturer can develop optimal solutions, perform them, and provide the results. After checking whether the rules are fulfilled for each specific task, the results would then be tabulated and made freely available to others to cross-validate the published data.<br /><br />To begin the development of an MVB, these single benchmarks should be easy to understand, have clear semantics, cover typical machine-vision tasks, and allow an easy comparison of vision systems.<br /><br />MVTec proposes a number of benchmarks (see below), each of which consists of a set of image sequences. Each sequence tests a specific behavior of a method. Within each sequence the influence of a "defect" is continuously increased. In template matching, an original image of a PCB could be generated and then successively defocused to provide a specific image sequence (see figure). 
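In code, the sequence-and-score idea might look like the following sketch, where the "defect" is a noise amplitude that grows step by step; the noise model and the stand-in detector are placeholders invented for illustration, not part of MVTec's proposal.

```python
# Sketch of a benchmark sequence in which a "defect" (here, a noise amplitude)
# is increased step by step; a tool is scored by how many steps it handles
# correctly. detect() is a placeholder for the vision tool under test.

import random

def degrade(value, level, rng):
    """Simulate a measurement corrupted by noise of growing amplitude."""
    return value + rng.uniform(-level, level)

def detect(measurement, truth, tol=5.0):
    """Placeholder tool: 'correct' if the result is within tolerance."""
    return abs(measurement - truth) <= tol

def score(truth=100.0, levels=range(20), seed=0):
    """Count how many degradation levels the tool processes correctly."""
    rng = random.Random(seed)
    return sum(detect(degrade(truth, level, rng), truth) for level in levels)

print(score())   # number of correctly handled levels out of 20
```

The same loop generalizes to speed and accuracy: time each call and record the deviation from ground truth alongside the pass/fail count.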
The quality of specific software can then be measured by the number of images that can be processed correctly. The tests would check the speed, robustness, and accuracy of each application task.<br /><br /><center><img src="http://images.pennnet.com/vsd/blogs/bvsidebar050609.jpg" /></center><br /><br />For each test sequence, typically 20-40 images of VGA resolution would be required. Since one image typically has a size of 200 kbytes using, for example, the PNG format, this results in a total size of about 500 Mbytes for all the benchmarks listed.<br /><br />MVTec would offer these test images together with the appropriate task descriptions, if a neutral organization such as the EMVA or the AIA would be willing to host it. Besides this, MVTec invites other manufacturers and users to an open discussion to bring the idea of an MVB forward to increase transparency in the machine-vision market.<br /><br /><strong>Wolfgang Eckstein</strong> is managing director of MVTec Software, Munich, Germany; <a href="http://www.mvtec.com/">http://www.mvtec.com/</a>.<br /><br />* "Setting the Standard: Despite the myriad machine-vision software packages now available, there is yet no means to properly benchmark their performance," <em>Vision Systems Design</em>, November 2008, pp. 
89-95.

Signal Architecture (April 9, 2009)

<p><div class="blogmeta"><img src="http://images.pennnet.com/articles/vsd/thm/th_314525.jpg" align="left" />Shuvra Bhattacharyya explores how emerging hardware platforms enable more advanced software for image-processing applications</div><br /><p></p><p><br /><div><span class="blogpreview">Shuvra Bhattacharyya is a professor in the Department of Electrical and Computer Engineering, University of Maryland at College Park, and holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies and an affiliate appointment in the Department of Computer Science. He received his BS from the University of Wisconsin at Madison and PhD from the University of California at Berkeley.</span></div><br /><p></p><p><br /><strong>VSD:</strong> <em>Could you provide us with some background information on your experience?</em></p><p><br /><strong>Bhattacharyya:</strong> My research interests include architectures and design tools for signal-processing systems, biomedical circuits and systems, embedded software, and hardware/software co-design. Before joining the University of Maryland, I was a researcher at Hitachi America Semiconductor Research Laboratory (San Jose, CA, USA) and a compiler developer at Kuck & Associates (Champaign, IL, USA). I'm presently the chair of the IEEE Signal Processing Society technical committee on design and implementation of signal-processing systems. 
</p><p><br />Books that I have co-authored or co-edited include <em>Embedded Multiprocessors: Scheduling and Synchronization</em> (second edition to be published by CRC Press in 2009); <em>Embedded Computer Vision</em> (Springer, 2008); and <em>Memory Management for Synthesis of DSP Software</em> (CRC Press, 2006).</p><p><br /><strong>VSD:</strong> <em>Which aspects of image processing interest you? What current research are you or your students pursuing?</em></p><p><br /><strong>Bhattacharyya:</strong> My research group at the University of Maryland--known as the Maryland DSPCAD Research Group--is focused on design methodologies and CAD tools for efficiently implementing DSP systems.</p><p><br />The objective of our work in the area of image processing is to develop programming models that capture the high-level structure of image-processing systems. We are also looking at analysis techniques for deriving implementation properties such as memory requirements and processing throughput from these representations. And we are looking at synthesis techniques for deriving optimized implementations on different kinds of target architectures, including programmable DSPs, FPGAs, and embedded multiprocessors.</p><p><br />The programming models we work with are based on dataflow principles and specialized to the area of signal processing, including applications that process signals from image, wireless communication, audio, and video streams. By applying specialized programming models, our methods are able to efficiently expose and exploit high-level computational structure in signal-processing applications that is extremely time consuming or impossible to derive from general-purpose program representations. 
</p><p><br />Some particular challenges in applying dataflow-based design methodologies to image-processing systems include incorporating multidimensional data into the formal stream representations used by the programming models and managing the large volumes of data and high performance requirements. In addition, increasing use of image processing in portable, energy-constrained systems makes it important to incorporate methods for aggressively optimizing power consumption while maintaining adequate image-processing performance and accuracy. </p><p><br />Two image-processing domains that I have been specifically involved in developing new design methods and tools for are distributed networks of smart cameras and medical image registration. The first is through an NSF-sponsored collaboration with Rama Chellappa (University of Maryland) and Wayne Wolf (Georgia Institute of Technology); and the second is through a collaboration with Raj Shekhar and William Plishker, who are jointly affiliated with the schools of Engineering and Medicine at the University of Maryland. </p><p><br /><strong>VSD:</strong> <em>How do you think this research will impact future generations of image-processing and machine-vision systems?</em></p><p><br /><strong>Bhattacharyya:</strong> I think that research on dataflow programming environments and tools will allow designers of these future systems greater flexibility in experimenting with different kinds of embedded processors and heterogeneous multiprocessor platforms. 
Most dataflow-based tools for signal processing operate at a high level of abstraction, where individual software components in conventional programming languages (e.g., C or Verilog/VHDL) are selected based on the back-end tools associated with the targeted platform.</p><p><br />These platform language components are interfaced through dataflow-style restrictions and conventions that allow for the inter-component behavior to be analyzed and optimized using formal dataflow techniques. The output of these tools is an optimized, monolithic implementation in the selected platform language; or, for heterogeneous platforms, the output is a set of multiple, cooperating platform language implementations. This output can then be further processed by the toolchain (e.g., the C compiler or HDL synthesis tools) associated with the target platform.<br />This kind of design flow provides a number of advantages that are promising for next-generation image-processing and computer-vision systems. First, the emphasis on component-based design--where components adhere to thoroughly and precisely defined interfacing conventions--facilitates agile, high-productivity, modularity-oriented design practices.</p><p><br />Second, the use of dataflow as effectively a source-to-source framework in terms of the platform language provides for efficient re-targetability across different kinds of platforms, and allows designers to leverage the often highly developed, and highly specialized back-end tools of commercial embedded processing platforms. 
This provides a complementary relationship between the high-level design transformations, which are handled effectively by dataflow tools, and low-level (intra-component) optimizations and machine-level translation, which are best handled by platform tools.</p><p><br />A general challenge facing this kind of two-level design methodology is the overhead of inter-component data communications, which can sometimes dominate performance if it is not handled through a more integrated design flow. I expect that designers and tool developers will continue to make advances in this direction by using techniques for carefully controlling the granularity of components, using block processing within components, and exploring new ways to model and optimize the mapping of component interfaces into hardware and software.</p><p><br /><center><img src="http://images.pennnet.com/vsd/blogs/vsdbvapr09.jpg" width="300" /></center><br /><center><i>Dataflow graph that represents an accelerator for evaluating polynomials. Each circle or oval represents a computational operation; the arrows that connect operations specify how data passes between operations. Annotations specify certain properties about the rates at which the incident operations produce and consume data. The operation labeled "controller" (broken out on right) has a hierarchical "nested" dataflow representation. (Adapted from Plishker, W. et al, Proc. International Symposium on Rapid System Prototyping, pp. 
17-23, Monterey, CA, June 2008).</i></center><p></p><p><br /><strong>VSD:</strong> <em>What developments in FPGA design will affect hardware developments and how will system designers incorporate them?</em></p><p><br /><strong>Bhattacharyya:</strong> I think that support for heterogeneous multiprocessing in FPGAs--both in terms of rapid prototyping and developing high-performance implementations--will contribute significantly to the increased use and customization of such multiprocessor technologies in image-processing systems. Modern FPGA devices provide valuable platforms on which designers can experiment with different multiprocessor architectures, including different combinations of processing units and different kinds of networks for inter-processor communication. This opens up a valuable dimension of the design space that must be explored more deeply to achieve the most competitive implementations of next-generation applications. Both "hard" and "soft" processor cores play useful roles in FPGA-based design methodologies and in applying these methodologies to develop embedded multiprocessor systems. Although soft cores incur significant penalties in terms of performance and resource utilization, they are relatively easy to configure in different ways to experiment with different numbers and kinds of processors, and get an idea of how an application will map onto and scale with different system architectures.</p><p><br />This kind of rapid prototyping approach allows designers to develop much better intuition about system architecture alternatives before investing large amounts of specialized effort developing or applying a specific multiprocessor platform. 
On the other hand, hard processor cores, together with signal processing accelerators and other kinds of specialized IP blocks, provide valuable frameworks for accelerating image-processing applications in performance-oriented production systems.</p><p><br /><strong>VSD:</strong> <em>Recent software developments in image processing include pattern recognition, tracking, and 3-D modeling. What algorithms and specific software developments do you see emerging in the next five years?</em></p><p><br /><strong>Bhattacharyya:</strong> I expect an accelerated use of heterogeneous platforms for image-processing software development, such as platforms involving combinations of GPUs and CPUs, or multiprocessors and FPGA-based accelerators. Heterogeneous platforms allow for more streamlined implementation, including exploitation of different forms and levels of parallelism in the application, and efficient integration of control and data processing.</p><p><br />The use of heterogeneous platforms, however, is conceptually more difficult, and the associated design flows are more complex. I expect increased attention to and application of frameworks that are aimed at application development on heterogeneous multiprocessor platforms. Some examples of emerging frameworks in this space are the open computing language (OpenCL), which is geared towards platforms that integrate GPU and CPU devices, and openDF, which is a dataflow-based toolset geared towards platform FPGAs and multicore systems. 
</p>

Process Control (December 16, 2008)

<div><br /><p><span class="blogpreview"><img src="http://images.pennnet.com/vsd/blogs/busviewdec08fetch.jpg" align="left" /><br />A discussion with Lou Fetch, Performance Automation<br /> Lou Fetch is with Performance Automation (Loveland, OH, USA; <a href="http://www.performanceautomationinc.com/">www.performanceautomationinc.com</a>), which was established in 2005. Editor in chief Conard Holton talked to him about inspection systems and implementing SPC practices.</span></p></div><br /><br /><p><b>VSD:</b> <em>Please describe your company and its services. What is the origin of your company?</em><br /> <br /><b>Fetch:</b> Performance Automation is a system integrator, specializing in machine-vision inspection solutions. Our roots came from the factory automation distribution business. As a result, we recognized that our customers had a critical need for machine-vision integration to improve their productivity and their competitiveness in a changing world economy.<br /> <br /><b>VSD:</b> <em>What technologies and components do you use for your applications? How often do you evaluate competing technologies?</em><br /> <br /><b>Fetch:</b> Lately, the majority of our applications have been PC-based solutions with higher-resolution cameras and with one axis of motion. 
We have used motion to accurately position the inspection item under the camera; other times we will present the camera to the inspection item. We have begun moving away from FireWire and using more GigE cameras. National Instruments Vision Builder software has solved several of our machine-vision and control applications, but we have also used custom C# programming to solve specific applications. Using new technology for the first time presents a certain amount of risk to the overall success of the project. Customer applications have challenged us to explore new technologies or techniques. New approaches could also translate to a more robust solution. Using established vendors with a solid reputation for support, quality products, and service will limit that risk.<br /> <br /><b>VSD:</b> <em>How do you approach a new application? Do you work with OEMs or other systems integrators?</em><br /> <br /><b>Fetch:</b> Some project opportunities are with clients using machine vision for the first time or who are relatively new to the technology. We take the time to walk them through the process of a successful application. It is essential for us to take a customer sample, acquire an image, and use basic vision tools to demonstrate the feasibility of the application, and assess and then discuss the risk. Our projects also provide for customer training, not only for the delivered solution but also for how to apply machine vision in general, to ensure that they have basic knowledge for simple troubleshooting of the installed project. In addition, the training allows the customer to better identify other practical applications for machine vision in their operation. In our fixed-cost project proposals, we often provide a 3-D model of the system layout with a detailed description of the deliverables. We refer to this approach as "Concept to Solution." 3-D models of recent projects have helped the client visualize what the solution will look like. 
A set of fabrication drawings can then be generated from these models (see Fig. 1).<br /> <br /><br /><img src="http://images.pennnet.com/vsd/blogs/bvdec08fig1.jpg" width="300" /> <br /><a href="http://images.pennnet.com/vsd/blogs/bvdec08fig1.jpg" target="_blank">Click here to enlarge</a><br /><br /><img src="http://images.pennnet.com/vsd/blogs/bvdec08fig1b.jpg" width="300" /> <br /><a href="http://images.pennnet.com/vsd/blogs/bvdec08fig1b.jpg" target="_blank">Click here to enlarge</a><br /><br />FIGURE 1. A test station, first developed in a CAD drawing, is used to inspect molded automotive terminals. The customer uses machine vision for defect inspection that was previously done manually. An SPC application will be used to determine the capability of their process in more detail. They currently do not have an efficient method to gather dimensional data.<br /><br />We are not currently using CAD with motion or to model lighting. However, at a technical conference last August we saw a presentation on the use of software to model machine-vision lighting and lenses, and we are curious to learn more. It is not uncommon for seemingly simple applications to take hours or more of effort to work out the light and lens details. Software tools would certainly be a timesaver.<br /><br /><b>VSD:</b> <em>How do you design your systems for OEM product obsolescence?</em><br /> <br /><b>Fetch:</b> To protect our customers' interests, we use commercially available products from vendors that are industry leaders. Commercially available products will also give the customer vendor options. For instance, there are several vendors that can supply FireWire or GigE cameras. Major vendors also often have an upgrade path when a product does become obsolete.<br /> <br /><b>VSD:</b> <em>In which areas do you see the most growth? 
What are users demanding from you in the design of new systems?</em><br /> <br /><b>Fetch:</b> One common element of recent projects has been higher-resolution cameras and statistical process control (SPC) reporting. We use standard SPC tools to show our clients what their process is capable of producing before we set pass/fail machine-vision criteria. This helps the client understand where to set machine-vision limits and ultimately assists them in improving their process. This also allows our clients to identify process improvements and then measure and quantify those improvements.<br /><br /><img src="http://images.pennnet.com/vsd/blogs/bvdec08fig2.jpg" width="300" /> <br /><a href="http://images.pennnet.com/vsd/blogs/bvdec08fig2.jpg" target="_blank">Click here to enlarge</a><br /><br />FIGURE 2. Standard SPC tools developed by Performance Automation show clients what their process is capable of producing before pass/fail machine-vision criteria are set. This helps clients understand where to set machine-vision limits, identify process improvements, and measure and quantify results. <br /><br />Our typical SPC GUI will have main tabs that display the current image, last failed image, and a statistics tab. The statistics tab has additional tabs that display information about a process variable. Each process variable is recorded for common process statistics such as Avg., Std. Dev., Max and Min values, Median, and the capability indices Cp and Cpk. The second tab for each process variable has a histogram of the measured variable. After the major variables are identified and the customer has inspected a suitable sample size of parts, we then turn the data into information they can use. This statistical process information can now be used to set the accept/reject limits for the vision tools. 
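To make the statistics concrete, here is a minimal sketch of the Cp/Cpk calculation behind such a statistics tab. This is generic Python operating on a plain list of measurements, not Performance Automation's actual software; the function name and spec-limit values are illustrative only.

```python
import statistics

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk) for a list of measurements against spec limits.

    Cp  = (USL - LSL) / (6 * sigma): potential capability of the process.
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma): actual capability,
    which penalizes a process that is running off-center.
    """
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk
```

A perfectly centered process has Cp equal to Cpk; as the mean drifts toward either spec limit, Cpk drops below Cp, which is why Cpk is the usual basis for accept/reject limits.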
One customer has used this information to improve their molding process.<br /><br /><b>VSD:</b> <em>In which geographical areas do you work?</em><br /> <br /><b>Fetch:</b> We are a machine-vision integrator located in southern Ohio. Our primary marketing area is about a 500-mile radius. Although we have on occasion traveled outside that radius for our global clients, we have not executed a project outside of the US yet.<br /> <br /><b>VSD:</b> <em>What new markets and technologies do you see for your company in the future?</em><br /> <br /><b>Fetch:</b> Honestly, there is currently enough business in the traditional markets that have been generating work for us. There are still plenty of applications to inspect molded and machined parts and labels, for instance. One interesting trend of late has been inquiries for machine vision in analytical applications. One professor asked us about using machine vision to monitor the activity of fish embryos in a Petri dish. Another client asked about using machine vision to quantify the amount of a compound that fluoresced under UV light. An emerging technology that is intriguing to us is time-of-flight cameras. Also, thermal imaging offers opportunities in certain markets.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-40654814647180486422008-10-16T10:24:00.004-05:002008-10-16T14:19:04.740-05:00A discussion with Ignazio Piacentini, ImagingLab<div class="blogmeta"><img src="http://images.pennnet.com/vsd/blogs/bvpiacentini.jpg" align="left" /><br />A discussion with Ignazio Piacentini, ImagingLab</div><p><span class="blogpreview">Ignazio Piacentini is director of ImagingLab (Lodi, Italy; www.imaginglab.it). He has a BSc in nuclear engineering (Milan, Italy) and an MSc degree in digital systems and instrumentation (Polytechnic of Central London, UK). 
He spent many years designing control and data acquisition systems for the thermonuclear fusion research community before joining the machine-vision industry. Editor in chief Conard Holton discussed trends in machine-vision systems, software, and integration with him.</span> </p><p><br /><br /><br /><b>VSD</b>: What sort of vision systems or services does ImagingLab provide? What is the origin of your company?<br /><b>Piacentini</b>: ImagingLab is a small high-tech company whose mission is to offer its know-how to system integrators, machine builders, and end users with the need to adopt innovative technologies in the field of machine vision and robotics, minimizing their investments and shortening their learning curve. Our core business is machine vision for industrial robotics, with what I consider a difference from most robotics companies: We have arrived at robotics from vision. Therefore, the design of our systems is vision centric with a very tight integration of the robotics and vision software, essentially leading to a single programming/configuration platform.<br /><br />We are an atypical integrator, more akin to a design/engineering bureau. We work in close partnership with other companies (typically much larger) from the early design phase to a validation prototype. The actual construction of "machines" is entrusted to our partners, but we follow closely the final engineering phase, the deployment of the machines to the factory floor, and the introduction of improvements arising from the everyday operation of the end users. We also provide the necessary technology transfer to facilitate the adoption of robotics and vision by both the machine manufacturers and the end user.<br /><br />I had my first glimpse of machine vision in the early 1980s, while working at the JET Project (Culham Laboratories, Abingdon, UK), a large thermonuclear fusion experiment. 
The personal computer revolution had not yet occurred and the vision algorithms were running slowly--an oxymoron?--on a PDP-11. In 1991 I left the research community and moved to machine vision. In 1995-96 I was directing Graftek Italy and took part in the discussion and negotiation between Graftek France and National Instruments (Austin, TX, USA; <a href="http://www.ni.com">www.ni.com</a>) for the acquisition of a LabVIEW-based image-processing library, which led to the development of the machine-vision product line in the following years. After a spell with NI as a European business development manager for machine vision, in 2004 I left to start ImagingLab.<br /><br /><b>VSD</b>: What technologies and components do you use in machine-vision-related applications? How often do you evaluate competing technologies?<br /><b>Piacentini</b>: Software plays the largest role in our system development and is obviously based on LabVIEW and the related vision library. With a company start in 2004, the choice was to go fully digital with the adoption of IEEE 1394 (FireWire) high-resolution cameras, while Camera Link has been used for a few high-end applications. In terms of CPUs, LabVIEW and its Real-Time version allow us to develop conveniently under Windows [XP Pro since we are not great fans of Vista], while deploying the application software on a number of different targets, from industrial panel PCs running XP Embedded to a number of RTOS boxes and smart cameras.<br /><br />GigE has slowly trickled into our applications but has not yet significantly offset FireWire. The communication with the robots and machine interface is based on a variety of standard protocols ranging from TCP/IP over Ethernet to Modbus. 
EtherCAT will facilitate the use of remote deterministic I/Os.<br /><br />3-D vision is becoming increasingly important, especially in conjunction with robotics, and we have very recently developed a LabVIEW toolkit, under contract with SICK-IVP (Linkoping, Sweden; <a href="http://www.sickivp.com">www.sickivp.com</a>), for their Ranger series of laser-scanning cameras.<br /><br />Concerning competing technologies, we keep a vigilant eye on all the novelties appearing on the machine-vision market, often evaluating new products with hands-on trials, while we are rather cautious in their immediate deployment.<br /><br /><br /><b>VSD</b>: In which areas of the industry do you see the most growth? What are users demanding from you in the design of new systems?<br /><b>Piacentini</b>: We are heavily biased towards machine vision for robotics. Roughly 100,000 robots are sold around the world every year, and this number is increasing steadily. The percentage of robots making use of vision is also increasing. European manufacturing companies are facing strong competition from China and India and can only compete by adopting more and more concepts of flexible manufacturing, which in turn leads to more advanced automation based on robotics and machine vision.<br /><br />Our systems are targeted to the manufacturers of machines that perform some kind of automated production or assembly cycle. Culturally, these companies have a strong mechanical background, which has to move towards the more recent world of mechatronics. The success of our applications/systems is linked to the overall performance of the machines in terms of cycle time but also to the ease of use of the technologies we are offering. 
"Ease of use" encompasses a number of issues: simple end-user interface, tools to facilitate the commissioning and startup of the machine, self-calibration tools [there is a need to correlate the "looseness" of pixels deriving from optical distortion of various kinds to the more rigid space of robot coordinates], and remote diagnostics and maintenance.<br /><br /><b>VSD</b>: How will OEM components targeted towards machine-vision applications have to change to meet future needs?<br /><b>Piacentini</b>: I would like to see some attempt to standardize the data sheets of all the components that are part of the acquisition chain, from sensors and cameras to illumination devices and optics. Comparing the merits/quality of individual components is today rather difficult and can hardly be done based on the published data. Think about, for instance, how to compare the sensitivity and noise level of two cameras from different vendors or how to evaluate the uniformity and intensity of illumination of an array of LEDs at a given distance.<br /><br /><b>VSD</b>: Could you discuss the machine-vision market in Italy and compare the machine-vision markets in different industry segments in Europe?<br /><b>Piacentini</b>: In the contest of the European Union, Italy is fourth in terms of gross domestic product, but is second after Germany in the production of "manufacturing machines." Seventy percent of these machines are exported. This obviously has an influence on the machine-vision market, machine vision being used to automate the production cycle as well as a quality control instrument. 
There are also other market segments, like food production and packaging, that represent a potentially large share of the machine-vision market.<br /><br />An historical challenge for machine-vision companies active in the Italian market is the fragmentation of the market itself and the inherent small size of the machine-vision system integrators, which fall well below the European average of 38 people per company. ImagingLab has a team of only eight people, yet is considered a medium-sized integrator! More information on the peculiarities of the Italian machine-vision market can be found in a presentation I gave during the 2008 EMVA conference in Berlin.<br /><br /><b>VSD</b>: What machine-vision algorithms and specific software developments do you see emerging in the next five years?<br /><b>Piacentini</b>: Algorithms for 2-D imaging have reached a reasonable level of maturity and completeness, and the fairly recent addition of geometric pattern matching to the more conventional approach based on normalized cross-correlation has vastly improved robotized pick-and-place operations. The machine-vision libraries available from various vendors offer more than just the pure algorithmic content--they also offer a very good level of user interface, simplifying the understanding and the interpretation of the results.<br /><br />With the advent of 3-D cameras capable of generating spatially calibrated 3-D images (either as a cloud of points or correlating the z dimension to the grayscale level), there is a lot to be done to reach the same level with a 3-D vision library. 
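For reference, the conventional approach Piacentini mentions, normalized cross-correlation, is simple enough to sketch in a few lines. This is a generic, brute-force Python illustration, not any vendor's library; images here are flat row-major lists of gray values, and the function names are illustrative only.

```python
def ncc(template, window):
    """Normalized cross-correlation of two equal-sized patches (flat lists);
    returns a score in [-1, 1], where 1 means a perfect match."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dw = sum((w - mw) ** 2 for w in window) ** 0.5
    if dt == 0 or dw == 0:
        return 0.0  # a flat patch carries no pattern to correlate
    return num / (dt * dw)

def match(image, width, template, twidth):
    """Slide the template over the image and return the (x, y) of the
    best-scoring position."""
    height = len(image) // width
    theight = len(template) // twidth
    best, best_xy = -2.0, (0, 0)
    for y in range(height - theight + 1):
        for x in range(width - twidth + 1):
            window = [image[(y + dy) * width + (x + dx)]
                      for dy in range(theight) for dx in range(twidth)]
            score = ncc(template, window)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```

Even this toy version hints at the cost: every candidate position rescans the whole template, and the score tolerates no rotation, scale, or perspective change, which is exactly what geometric and 3-D matchers must add.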
It is sufficient to think of the increase in complexity required for the reliable detection of a specific pattern once perspective and different 3-D positionings in space have been taken into account.<br /><br />Processing speed will also benefit from the increased availability of multicore CPUs, though some rethinking will be required at the algorithmic level to be able to distribute data processing on these new hardware architectures.<br /><br /><br /><b>VSD</b>: What kinds of new applications and industry trends do you expect to emerge in the future?<br /><b>Piacentini</b>: I can think of many, but solving and generalizing bin-picking, as opposed to palletizing parts, is one of the current dreams of many companies involved with flexible manufacturing: an application that can most likely be solved with a careful combination of 3-D and 2-D imaging. I also see machine vision becoming an integral part of robotics and no longer an external add-on, with the possibility of using vision-derived information in the kinematics control loop to improve speed and positional accuracy.<br /><br />As the price/performance of machine-vision systems continues to improve, more and more systems will be deployed, with quality control becoming distributed throughout the whole process rather than being confined to the end of the production line.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com2tag:blogger.com,1999:blog-4340430328917098832.post-90125787478914602382008-07-30T12:44:00.009-05:002008-07-31T14:20:59.797-05:00Shape of things to come<div class="blogmeta"><img src="http://images.pennnet.com/vsd/blogs/busviewjulynew.jpg" align="left" /><br />A discussion with Rainer Obergrußberger, in-situ</div><p><span class="blogpreview">Rainer Obergrußberger has an extensive background in image processing and analysis and is the managing director of in-situ (Sauerlach, Germany; www.in-situ.de). 
Editor in chief Conard Holton spoke to him about integrating machine-vision systems and developing innovative products.</span> </p><p><br /><br /><b>VSD</b>: What does your company do; what services does it provide?<br /><b>OBERGRUßBERGER</b>: in-situ is a growing company with more than 20 years of experience in the fields of image processing and machine vision. We specialize in industrial, medical, and scientific applications, offering image-processing systems in a broad range of products, as well as specialized development of hardware and software. Our business units are PC vision, smart-camera applications, and OEM products. In-situ means literally “without change of location.” In our business, in-situ means measurements using noncontact methods.<br /><br /><b>VSD</b>: What is your personal background in the machine-vision industry?<br /><b>OBERGRUßBERGER</b>: After an apprenticeship in the electronic industry, I studied computer science at the University of Applied Sciences in Rosenheim, where part of the course was focused on image analysis. During the last few years of my course I worked as a part-time student for the image-processing lab at the university. I also worked part-time for the company vidisys on software for a tool-presetting machine, to capture and measure geometrical shapes of metal-cutting tools with a camera setup.<br /><br />I started at in-situ in 2001, first as an application engineer, later as CTO, and since the beginning of 2007 I have been the managing director. Three years ago I started a master’s degree course with a focus on computer graphics and image analysis. My master’s thesis was about the analysis of patterns in the biological field, and I was lucky enough to spend eight months at the University of Queensland, Australia, to complete this work.<br /><br /><b>VSD</b>: What technologies and components does in-situ use? 
How do you evaluate competing technologies?<br /><b>OBERGRUßBERGER</b>: We use a wide range of hardware from different companies, depending on the application we want to develop. Analog cameras are now playing only a minor role in the applications of in-situ. Digital interfaces are more flexible regarding signal transfer and scalability.<br /><br />CMOS cameras still lack sensitivity and image quality, but are usually cheaper than CCD-based hardware and often sufficient. As more and more digital camera types support standard computer interfaces, such as FireWire or Gigabit Ethernet, frame grabbers are often no longer needed. To evaluate competing technologies we must sometimes make lengthy tests with new hardware to discover the benefits and limitations of the new technology. As this can be quite time-consuming and expensive, we always try to rely on proven hardware for as long as possible. We don’t get on every new technology train that comes along just because it’s trendy! Usually it takes some time for a new technology to compete with or even replace an old but well-proven one. In general, the competence of a solution provider such as my company is to decide which technology, including camera, illumination, and frame grabber, best fits the requirements of a customer’s application.<br /><br /><b>VSD</b>: How do you approach a new application? Do you work with OEMs or other system integrators?<br /><b>OBERGRUßBERGER</b>: A new application is always a challenge, especially when there is little or no experience with the customer’s products or business. In my opinion, you should spend as much time as possible exploring what the customer’s application is all about and also thinking about what possibilities exist to provide the best solution. 
For example, in the beginning I don’t just speak to the boss of a company, but also to the worker who is confronted with a certain quality problem in a production line every day.<br /><br />I find it best to have a look at how the company deals with certain quality problems, and at internal company processes. To fulfill a customer’s requirements, you have to completely understand how the company works. We always start an application by developing a proof of principle. If such a proof doesn’t take longer than one day, we do this for free. It’s the first step to show a potential customer the capabilities of the company. If possible, we try to get test samples covering the complete product spectrum, from good parts to faulty parts.<br /><br />We’ve got various optical setups that we can use to test the provided parts. These include laser scanners, linescan and areascan camera setups with different resolutions and lenses, as well as smart cameras. We don’t develop hardware such as cameras or frame grabbers; we simply buy them from vision component suppliers such as Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) or Cognex (Natick, MA, USA; www.cognex.com). However, it’s a bit different with illumination. One of the most important things for providing a proper vision solution is having the most appropriate illumination concept. If standard illumination can’t satisfy the requirements, we develop a custom one ourselves.<br /><br /><b>VSD</b>: How do you design your systems for product obsolescence?<br /><b>OBERGRUßBERGER</b>: Unfortunately, normal obsolescence of a vision system always comes along with the life cycle of the product that the system was developed for. To be able to fulfill further, unknown requirements, hardware and software should be flexible enough to accommodate different configurations and new tools.<br /><br />At in-situ, we increasingly try to develop software in a reusable way. 
A proper camera interface should support many kinds of cameras—those available now and in the future. As we can’t handle the wide range of hardware alone, we use standard software tools such as Cognex Vision Pro or Stemmer’s Common Vision Blox. Software should contain options for data storage, statistics, and an easy-to-use graphical user interface. Quite often it is a trade-off between maximizing flexibility on the one hand and software that is well adapted to the customer’s requirements on the other.<br /><br /><b>VSD</b>: What algorithms and specific software developments do you see emerging in the next five years?<br /><b>OBERGRUßBERGER</b>: Many of the basic ideas of image analysis were founded in the 1970s, including edge detection, template matching, and image filters. However, the algorithms have improved greatly in the last 20 years, and new approaches have been created. Higher computing performance allows better and more complex algorithms to be calculated in real time and, therefore, used for machine-vision applications.<br /><br />Vectorized pattern-matching has opened new dimensions in pick-and-place applications, as well as in product identification. Twenty years ago such algorithms would have been called “coffee algorithms,” as there would have been enough time to drink a good cup of coffee in the time it took to process a single image! Another good example is the shape-from-shading principle that we picked up three years ago. This also originated in the 1970s. We worked on optimizing hardware and software and are now able to extract 3-D surface data in real time. I think there will be new approaches for better and faster 3-D pattern-matching in the next five years. The “grip in a box” is still hard to solve and probably one of the biggest challenges in robot applications—in this case, a reliable and fast 3-D model finder would revolutionize the possibilities. 
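One of those 1970s-era building blocks, gradient-based edge detection, is compact enough to sketch. This is a generic Python illustration of the classic 3x3 Sobel kernels, not in-situ's code; images here are flat row-major lists of gray values.

```python
def sobel_magnitude(image, width):
    """Approximate gradient magnitude of a flat row-major grayscale image
    using the classic 3x3 Sobel kernels; border pixels are left at zero."""
    height = len(image) // width
    gx_k = [-1, 0, 1, -2, 0, 2, -1, 0, 1]   # horizontal-gradient kernel
    gy_k = [-1, -2, -1, 0, 0, 0, 1, 2, 1]   # vertical-gradient kernel
    out = [0.0] * len(image)
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            gx = gy = 0
            for ky in range(3):
                for kx in range(3):
                    p = image[(y + ky - 1) * width + (x + kx - 1)]
                    gx += gx_k[ky * 3 + kx] * p
                    gy += gy_k[ky * 3 + kx] * p
            out[y * width + x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical brightness step produces a strong horizontal gradient along the step and nothing elsewhere, which is exactly the "edge detector" behavior the 1970s literature formalized.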
In addition, tracking has become very important for camera surveillance and automatic guidance systems. Automatic and dynamic traffic control supported by cameras will be a reality in the near future.<br /><br /><b>VSD</b>: How will OEM components targeted toward machine-vision applications have to change to meet future needs?<br /><b>OBERGRUßBERGER</b>: I think that OEM components will have to be packed with intelligence, reliability, and flexibility, and that there will have to be standards. The vision-sensor market has grown considerably in the last few years. Traceability opened a broad and lucrative market for ID reading and verification. OEM components for complex solutions will always have to be integrated by professional solution providers such as my company, but the vision-sensor market is targeted toward machine builders with less experience in machine-vision issues. Therefore, vision sensors have to be easy to use, easy to understand, and very stable in different environmental conditions.<br /><br />For interaction between different systems there will have to be standards for communication protocols, for hardware adaptation (for example, power supply, cables, and mounting), and for software interfaces. For in-situ, the GenICam standard has already been a large and important step toward a common and generally used camera description language. But it’s not just the components that have to change to meet future needs. In addition, the way in which vision solutions are provided and integrated will have to be increasingly customer-oriented. Vision systems shouldn’t be just a black box in a production line. Customers require adequate training on a system to learn how it works and how they can fix certain problems themselves. 
Every system, however small and simple it might be, must be properly documented and described to the customer.<br /><br /><b>VSD</b>: What new markets and technologies do you see for in-situ in the future?<br /><b>OBERGRUßBERGER</b>: In the next few years the company wants to establish a new business unit in 3-D surface inspection. After a major modification in hardware and software, we developed a new shape-from-shading technology that allows reconstruction of 3-D surface data from a wide range of materials. With DotScan—a vision system to inspect embossed Braille printing on pharmaceutical folding boxes—we have already shown that this principle works well in an industrial environment. Our patented inline version, called SPARC (surface pattern analyzer and roughness calculator), allows the inspection of surfaces in various fields of view and accuracies down to a few microns on moving parts. In November 2007, SPARC won the Vision Award prize for applied and innovative vision technology at the VISION trade fair in Stuttgart, Germany.<br /></p>Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-36000186128786184282008-06-23T09:04:00.004-05:002008-06-27T09:43:33.363-05:00Camera Smarts<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/bradheadshot4.jpg" align="left"><br /> <br> A discussion with Brad Munster, Visionary Technologies<br /></div><p><br /><span class="blogpreview"><br />Brad Munster has a background in electrical engineering and engineering sales and is the owner and president of Visionary Technologies (Holland, MI, USA; www.vis-tech.com). He has been working with smart cameras for more than 11 years. Editor in chief Conard Holton spoke to him about trends in machine-vision systems and service.</span> <p><br /><br /><br /><b>VSD: </b> Please describe your company and its services. 
<br /><b>Munster: </b> Visionary Technologies is a machine-vision integrator that specializes in the development and deployment of smart vision camera systems. The company provides customers with turnkey inspection machines, retrofits cameras to existing manufacturing lines, and services existing camera systems. I have been in the automation field and with manufacturing companies for 15 years— the first three with a distributor in Australia. We focused on products to serve the mining industry. I later worked as a sales engineer for a high-tech distributor for industrial automation in Michigan and then for a system integrator before starting Visionary Technologies about five years ago.<br /><br /><b>VSD: </b> What technologies and components do you use for your applications? <br /><b>Munster: </b> Visionary Technologies uses smart cameras for 90% of its applications. This is something our customers are more familiar with and therefore not as hesitant to implement. The cameras we use come from manufacturers such as Cognex, DALSA IPD, Keyence, National Instruments, and PPT Vision. We continually evaluate all latest technologies to stay current and provide our customers with the best solution within their budgets. We try to choose the best system for the application and not focus on the name on the side of the camera. Each manufacturer has strengths and weaknesses with its products and algorithms. Also we see customers favoring one camera brand over another, and we provide them with a realistic assessment of which system would be best for their application.<br /><br /><b>VSD: </b> How do you approach a new application? Do you work with OEMs or other system integrators? <br /><b>Munster: </b> Machine vision is often an oversold technology, or customers think it is a magic pill. Many customers believe that the camera can inspect anything within the field of view—and sometimes out of it—and that it should work flawlessly out of the box. 
From the first meeting we endeavor to walk them through the integration steps and reset them to a more realistic set of expectations. Some customers think that a camera is like other pieces of automation equipment—once it is programmed everything is done. We prepare them for a much longer debug time. Instead of two to five days, we try to have them expect two to five weeks, or longer, depending on the variables. During this time we often find manufacturing issues that the customer did not believe were relevant or possibly did not know about. Some of our largest customers are other system integrators. In today’s market very few companies have the resources to have a person solely dedicated to a specific task such as machine-vision programming. This is a problem when you are trying to implement machine vision because you need to be working with this technology every day or you will lose or have to relearn skills and techniques. Some system integrators or machine builders may implement a camera system once every three to six months. We are working on six to eight different projects per week. This keeps skills sharp and also exposes us to many different types of applications, which increases our knowledge base and techniques.<br /><br /><b>VSD: </b> Recent software developments in image processing include pattern recognition, tracking, and three-dimensional modeling. What algorithms and specific software developments do you see emerging in the next five years?<br /><b>Munster: </b> The company has started to develop its own set of tools and algorithms in any given platform. The software or hardware manufacturer often gives you a great starting place for basic inspections and typical applications, but customers can run into problems when they can’t go beyond standard software tools (see photo). We have developed more advanced tools and techniques that let us ‘go under the hood’ by combining multiple tools to make the customer’s algorithms more robust and error-proof. 
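The tool-combining idea can be sketched abstractly: run several independent checks on the same image and pass a part only if all of them agree, trading a few false rejects for far fewer false accepts. The check names below are hypothetical, and the snippet is generic Python rather than any camera vendor's API.

```python
def robust_verdict(checks, image):
    """Run several independent inspection checks on the same image and pass
    the part only if every one of them agrees; any single disagreement
    fails the part."""
    results = {name: check(image) for name, check in checks.items()}
    return all(results.values()), results

# Hypothetical checks; an 'image' here is just a flat list of pixel values.
def bright_enough(img):
    return sum(img) / len(img) > 50   # crude part-presence check

def not_saturated(img):
    return max(img) < 250             # crude exposure sanity check
```

A failed part comes back with the per-check results, so the operator can see which individual tool disagreed rather than just a bare reject.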
Some of the new software developments will probably be based around better OCR and Data Matrix tools. These are becoming more prevalent as traceability and accountability requirements grow.<br /><br /><b>VSD: </b> How do you design your systems for OEM product obsolescence? <br /><b>Munster: </b> Customers should consider the camera system as an asset and therefore keep in mind potential redeployment options in the future. Redeployment and retooling are definitely new buzzwords on the manufacturing floor. Many of the smart cameras can be upgraded to newer versions of software and firmware. Sometimes, for an upfront incremental cost, the customer can purchase a platform that is more versatile for future use rather than for the immediate application. We have redeployed obsolete camera systems for new applications due to customer budget constraints. Obviously we need to determine in advance if this hardware and software will meet the requirements and if it can be integrated efficiently. Sometimes a current camera system will offer more advanced software tools that will reduce integration time and therefore lower the overall cost of the project.<br /><br /><b>VSD: </b> In which areas do you see the most growth? What are users demanding from you in the design of new systems? <br /><b>Munster: </b> Smart camera systems still seem to dominate most factory floors. These systems are becoming easier to use, and the hardware costs continue to go down. While PC-based systems seem to be entering the mainstream as customers become more camera savvy, they are still more difficult to deploy and maintain. The bottom line is that customers are always demanding the most cost-effective solution to solve their problems. I think they realize more today than they did five years ago that cost of ownership is much more important than up-front cost.<br /><br /><b>VSD: </b> How will OEM components have to change to meet future needs? 
<br /><b>Munster: </b> I think future-proofing projects is something that will help the longevity of any product. Many of the components we use are very versatile. Vision applications change so much from design to implementation that we need components that can change with the application. <br /><br /><b>VSD: </b> How do you think that the machine-vision market differs in different national or international regions? <br /><b>Munster: </b> Typically we stay within North America. From an integrator or machine-builder perspective we see regulations as about the only major difference. Most of the principles and technologies are common across the world. Differences between national and international regions have more to do with understanding regulations and certifications than with key differences in technologies or their application. It is much like going from plant to plant—each customer has its own set of requirements and standard practices.<br /><br /><b>VSD: </b> What new markets and technologies do you see for your company in the future? <br /><b>Munster: </b> The company is growing into more of a service-oriented company rather than a machine builder or system integrator. We have found that we can serve more markets and customers by providing them with “vision experts” as opposed to providing a single machine or camera installation. I see us growing to meet the demand for applications that use PC-based systems. We have geared up to service and supply this technology in the future. Our value to customers is being able to solve their machine-vision problems regardless of platform.
If we can come in and work on any type of system they have, we will always add value.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-82534353868504566642008-05-12T09:51:00.006-05:002008-05-12T10:03:00.507-05:00Evolution of software-development tools<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/newdawson_2008-04.jpg" align="left"><br /> <br> Ben Dawson<br /><br />BEN DAWSON is director of strategic development at DALSA IPD, Billerica, MA, USA; <a href="http://www.dalsa.com" target="_blank">www.dalsa.com</a></div><p><br /><span class="blogpreview"></span> <p><br /><br />In the beginning there was the library, and there was a multitude of them. The user interface was the library documentation or a text-based interpreter. Libraries and interpreters impose a large memory load--the amount of information a developer has to remember to use the software. The introduction of graphical user interfaces (GUIs) greatly reduced memory load by presenting image-processing functions and program flow as graphical elements that you can pick without remembering their details.<br /><br />GUIs present functions and program-flow control elements as icons or in option lists, with functions shown as icons and program flow as lines connecting them. More recent GUI paradigms include automatic generation of blocks of C or Visual Basic code, spreadsheets, and image-processing “building blocks” that can be graphically combined.<br /><br />Most of these paradigms require a developer who is familiar with image processing, for example, knowing about “edge detectors.” To further reduce memory load, newer development tools build image-processing knowledge into domain-specific tools.
For example, the DALSA IPD iNspect package provides a “caliper” tool that is familiar to developers doing metrology and that intelligently encapsulates knowledge of “edge detectors.” I think these kinds of development tools are the near-term trend, as they allow more developers to quickly solve most common vision tasks.<br /><br />Longer term, I see three trends. First, the underlying algorithms will become smarter and more like human vision, so that the imaging setup doesn’t have to be as constrained. For example, image-processing algorithms should be better at ignoring lighting changes, so we can be less concerned with lighting. <br /><br />Second, there will be more “machine learning,” so a developer can “show” the system defects and let it learn what these defects are and the best ways to find them. Third, the user interface will continue to evolve to reduce memory load. The vision-system designer of 2037 might develop applications by having a dialog with his or her vision system, the way we now instruct a human inspector.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-74445043743061528782008-04-28T11:25:00.016-05:002008-04-30T13:02:06.648-05:00Programming Success<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/newnedlecky.jpg" align="left"><br /> <br> A discussion with Ned Lecky, Lecky Integration <br /></div><p><br /><span class="blogpreview"><br /><br />Ned Lecky is owner of Lecky Integration, Albany, NY, USA; www.lecky.com, which was established in 2007. He has held numerous positions in the machine-vision and electronics industries and has taught electrical and computer engineering. </span> <p><br /><br /><b>VSD</b>: Please describe your company and its services. What is your personal background in the machine-vision industry? <br /><br /><b>Lecky:</b> I started my career in machine vision after graduating from college in 1984. 
I was a systems engineer for Control Automation (CA) in Princeton, NJ, where I was lead vision-system programmer and learned a great deal about custom frame grabbers, cameras, lighting/optics, and motion control for printed-circuit-board applications. I started Intelec Corp. soon after and continued integrating systems for CA, Cognex, and Intelledex. I got the idea to build a better mousetrap in 1992 and wrote Sherlock, the first industrially focused machine-vision software application for Windows. Intelec became part of Imaging Technology, which then became Coreco Imaging, which then became DALSA. The much evolved and improved Sherlock is still available today. Lecky Integration is a small company that focuses on application-specific integrated solutions involving machine vision, fuel cells, robotics, or custom electronics. As a small business, I seek out vendors, integrators, machine builders, and software professionals to form multi-disciplinary teams to solve complete problems. <br /><br /><b>VSD</b>: What technologies and components do you use for your applications? How often do you evaluate competing technologies? <br /><br /><b>Lecky:</b> The distributors or integrators with whom I work often specify cameras and hardware accessories. Many sales leads come through these channels, and it is best business practice to use the hardware and software products that these partners are already representing and supporting. Fortunately, vision hardware and software components are much more inherently compatible than they were even five years ago; this is both a blessing and a curse of open system standards. It is a blessing for the consumers of machine-vision technology but can seem a curse to the suppliers who may do a great deal of work to convince a customer that vision is a good solution for their problem, only to have another lower-cost provider come in and sell a competing component for follow-on systems.
Ultimately, this is good for the component vendors, since it helps them understand the market, see where costs can be cut, and recognize their own true advantages. However, this “advantage” usually causes bump after bump along the road to knowledge and can be quite challenging. <br /><br /><b>VSD</b>: How do you approach a new application? Do you work with OEMs or other system integrators? <br /><br /><b>Lecky:</b> Know your customer. Know your customer’s boss. Understand the customer’s business, both financial and political. Know your customer’s customers and maybe some of their vendors, too. Solving the technical aspects of the application is usually much easier than these first parts, but a good technical solution that flies in the face of business needs is a huge failure. I’ve been there, and it is not pretty, fun, or profitable. All of our solutions involve teams and partners. Teams take time to build and maintain, and a certain trust develops as applications are solved in a professional way that protects and nurtures each team member’s business interests. I have worked with component vendors, distributors, vision-system suppliers, OEMs, R&D organizations, and technical/nontechnical end users extensively and interchangeably since 1985. <br /><br /><b>VSD</b>: Recent software developments in image processing include pattern recognition, tracking, and 3-D modeling. What algorithms and specific software developments do you see emerging in the next five years? <br /><br /><b>Lecky:</b> I think we’re totally stuck, actually. I worked on bio-inspired cognitive computing hardware strategies while getting my M.S. and Ph.D. in electrical engineering. I have long felt that the key to the next advance in machine-vision algorithms is to abandon algorithms altogether and to start, instead, emulating the human image-processing system.
The dissatisfaction I have with machine-vision systems failing to recognize occluded, fuzzy, out-of-expected-location patterns in variable lighting conditions is not really much less today than it was 20 years ago. Try giving a capabilities demo of a state-of-the-art vision system to someone who has never seen machine vision before and you’ll know what I mean. <br /><br />Ten or so years ago, I would carefully hand-code new algorithms in MMX assembly language to get optimal speed (on 266-MHz Pentiums!). Now, image-processing libraries are comprehensive and effectively free and offer optimized code that can run on multigigahertz, multicore processors. If anything, the hardware is now too fast and too powerful to be fully engaged. <br /><br />We must find more cognitive and truly intelligent architectures for both hardware and software before major advances can be made. Admittedly, the state of the art is excellent in pattern recognition, tracking, and three-dimensional modeling. However, the accuracy and overall image-processing capabilities of a housefly dwarf those of our current systems, all just using some fraction of a housefly’s million-neuron brain. I find this frustrating. <br /><br /><b>VSD</b>: How do you design your systems for OEM product obsolescence? <br /><br /><b>Lecky:</b> Optics- and lighting-system product lines are quite stable, and same-or-better replacements are generally available when a vendor discontinues a product. Many of the specialty products are built to order anyway, and vendors are often pleased to build an “old” version of a lighting system or a lens. The camera market is extremely dynamic, but again, the new models tend to be same or better for lower cost, so updating to a different camera is often not very painful. <br /><br />The software, of course, is the issue, especially when the system includes volumes of custom code written to integrate the standard system with a real factory-floor monitoring system.
Since 1995, I have focused on Windows-based software using C++ for algorithms and time-critical functions and Visual Basic for operator interfaces and front-ends. This architecture has proven resilient, since this more-than-ten-year-old code will still compile and run on modern computers by taking advantage of the Windows development tools. <br /><br /><b>VSD</b>: In which areas do you see the most growth? What are users demanding from you in the design of new systems? <br /><br /><b>Lecky:</b> In North America, we know that the bulk of the machine-vision industry is in application-specific machine-vision (ASMV) systems, or turnkey solutions that sell for about $125K, on average. This ASMV market is projected at $1.2 billion for 2008, while component sales (cameras, lenses, lighting, and so forth) are projected at less than $200 million. So solving the customer’s problem with a complete ASMV is still where the bulk of the market is. In new designs, users continue to demand systems that are more tolerant of variation in lighting, product, or operator. And we continue to design systems that attempt to minimize variation in lighting and operators, since I still have never seen a reliable machine-vision system that permitted variation in either. Lower cost is always an issue, but the cost has come down so far from the good old $20K vision-system days that it is less important in most applications. Size and power consumption are rarely an issue for our clients. <br /><br /><b>VSD</b>: How will OEM components targeted toward machine-vision applications have to change to meet future needs? <br /><br /><b>Lecky:</b> As the cost and price of machine-vision components continues to tumble, the vendors of these components cannot afford to spend as much time training, supporting, or assisting their customers. This means that the components must be more reliable, self-calibrating, self-training, and effectively bullet-proof.
You see such product components coming out of most of the machine-vision software and system companies already. Unfortunately, one of the best ways to make a component more reliable is to reduce its feature set, which then renders the component less attractive to the broader customer base. So there is a continual rebalancing of feature set vs. ease of use that I think the component and system vendors are really struggling with right now. <br /><br /><b>VSD</b>: Do you work outside North America? How do you think that the machine-vision market differs in different regions such as China? <br /><br /><b>Lecky:</b> I’ve done machine-vision work in France, Germany, Ireland, Singapore, and China. China, of course, stands out due to the size and fluidity of its market in the present era. I was astounded by its competency in steel making, railroad building, and railway freight-car building. <br /><br />The application I completed involved high-accuracy gauging of railway axles after grinding and prior to custom boring a pair of wheels to press onto them. This application required many large machines from several vendors, a football field full of material-handling equipment, and dozens of PLCs communicating with four PCs performing orchestration, control, and database management (see photo). <br /><br />The local engineers were very adept at designing, building, and programming, or at least customizing, all of the equipment. I must say that the first time I saw ladder logic in Chinese characters I was a bit more overwhelmed than I usually am, even by ladder logic. The technical rank-and-file in China can do engineering and programming at the “Western” level, if not better—something that still surprises some of my colleagues. The Chinese people were also very kind, hospitable, playful, and patient with me. In North America, we often thought of machine vision as a labor-saving technology. 
In China, while labor costs are on the rise, they are still (and perhaps forever) much lower than those in the USA. However, repetitive tasks such as gauging, inspection, and verification are still unappealing jobs that people are simply not very good at, and so, like many others, I see a bright future for low-cost machine-vision components in China. <br /><br /><b>VSD</b>: What new markets and technologies do you see for your company in the future? <br /><b>Lecky</b>: I believe in being very flexible and helping my clients solve their problems in ways that are acceptable and practical. I’m not going to use an FPGA-based solution for a customer who wants solution update control but doesn’t know Verilog or VHDL--it just wouldn’t be prudent. <br /><br />I think the key to my staying power in the industry over the years has been a willingness to understand the customer’s business issues and corporate structure and to weigh these factors just as much as the technical ones in tailoring solutions to their problems. Many of the professionals in our industry will be nodding their heads as they read this, I bet. So, at least in that way, we will continue to follow our customers, not lead them.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com1tag:blogger.com,1999:blog-4340430328917098832.post-69482160228854915362008-03-18T14:56:00.024-05:002008-03-19T12:32:38.843-05:00Complex Thinking<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/papenew.jpg" align="left"><br /> <br> A discussion with Katrin Pape, CTMV<br /></div><p><br /><span class="blogpreview">Katrin Pape is cofounder and managing director of CTMV (Consulting Team Machine Vision; Pforzheim, Germany; www.ctmv.de). She has worked in numerous positions in the machine-vision industry.
</span> <br /><br /><br /><br /><br /><p><b>VSD</b>: Please describe your company and its services.<br /><b>Pape</b>: CTMV is a solution provider with experience in the field of image processing. Its founders draw on expertise in developing new application areas and complex application scenarios and their implementation in workable business solutions. Applications are focused on surface inspection of visually difficult materials such as glass, ceramics, various metals, plastic tubes, and foils.<br /><br />In addition, we provide precise dimensional-accuracy verification, mainly for the stamping industry, and presence/absence checks in applications such as the assembly of gear shafts, as well as position detection of moving and/or complex parts for robots and handling. CTMV offers business solutions for quality assurance in the fields of medicine, electronics, and automotive, as well as for process optimization of manufacturing in the fields of metal working, extrusion, and foil/paper production. <br /><br /><b>VSD</b>: What technologies and components do you use for your applications? How often do you evaluate competing technologies? <br /><b>Pape</b>: Depending on the inspection task and its general framework, we use area- or linescan camera technology combined with appropriate standard interfaces based on camera buses or network technology, as well as PC or embedded compact vision systems. We continuously evaluate new products and work in close cooperation with standard component suppliers. The decision on whether new products will finally be implemented in standard applications is based on their specific benefit and on whether they help solve problems in a more reliable, robust, and possibly cost-efficient way. <br /><br />Application-specific software with straightforward user interfaces and reliable intelligent algorithms for feature extraction and analysis will usually be implemented by CTMV.
Developing field-specific analysis software and tools with a minimum set of parameters but a maximum of intelligence and performance is one of our core competencies. <br /><br /><b>VSD</b>: How do you approach a new application? Do you work with OEMs or other system integrators? <br /><b>Pape</b>: Conceptual design of new applications is the key intellectual property that distinguishes us from our competitors. With our broad experience base as a team, we continue to be able to design new approaches and concepts. This starts with determining the appropriate method of image acquisition, as well as highlighting test criteria with an adequate optical and illumination setup. The process includes developing reliable and robust methods for analysis of the image content, and continues through to integration into the process chain with adequate signal and data exchange. <br /><br />Key requirements are usually specified by customers or mechanical engineers. The diversity of customer and field-specific needs—in each case based on a combination of image processing and process engineering—opens up completely new applications. One example is a system for continuous tube inspection during extrusion, with detection and classification of process-related defects combined with special failure management and alerting. Another is quality control of metallic gear wheels during production, detecting scratches, cracks, and broken parts (see figure). <br /><br /><b>VSD</b>: How do you design your systems for OEM product obsolescence? <br /><b>Pape</b>: Our safeguard here is working exclusively with industry standards. Components are exchangeable quickly and easily, so our customers get continuous, reliable functionality. We are independent of operating systems because we cooperate with reputable partners such as National Instruments (Austin, TX, USA; www.ni.com).
These companies develop and provide system- and field-independent standard components that meet industrial camera interface and automation standards. <br /><br /><b>VSD</b>: What are users demanding from you in the design of new systems? <br /><b>Pape</b>: The general expectation is the application of modern, open standard technologies. Based on the latest developments on the components market and the way we integrate these trends into our system concepts, we meet these expectations in every respect. <br /><br />The bigger challenge, however, is the balancing act needed to cover the span between the requirements for maximum identification performance—frequently using highly complex methods of analysis—and the simultaneously required ease of use. Examples are the stamping, packaging, and food industries, with tasks such as presence/absence checks and version monitoring. <br /><br />In these cases, the variety of parts to be inspected is high, and operators want to be able to create test plans for modified or new products themselves. This requires robust test cells on the one hand, and on the other hand adaptable software with respective part management and the right balance between parameterization possibilities and hidden advanced functionality. <br /><br />To meet these requirements, we generally develop customer-specific user interfaces and implement easy-to-use setup and inspection-plan editors with a minimum set of algorithms, tailor-made for the respective application or field of industry. We make a point of thinking in and implementing the language of the respective user or application environment and not that of image processing. Based on these key strategies, we provide users with the necessary flexibility without requiring them to have specific knowledge of image processing. <br /><br /><b>VSD</b>: How will OEM components targeted toward machine-vision applications have to change to meet future needs?
<br /><b>Pape</b>: Scalability and adaptability are required not only for software, but also for system bases. We often encounter the issue that initial specifications are later enhanced, further inspection tasks are added, or solutions have to be adapted to specific customer environments—all while keeping development efforts as low as possible. The ongoing standardization in the field of camera interfaces is an essential step and key factor in this respect. It enables us to meet these requirements today and should definitely be pursued further into the future. <br /><br />Another key point is that, as a rule, vision systems need to communicate with a complex network of instrumentation, control, and drive systems. The direct combination of industrial interfaces for process communication such as Ethernet, digital I/O, and so forth, with image-processing components is beneficial for us. Application of various modules, including those from different suppliers, can be minimized or might not be necessary at all. This saves time and limits technical risks. <br /><br />Examples are modules that combine the camera bus with FPGA-based digital I/O. The camera bus handles the image acquisition, while the complete timing, trigger handling, and sometimes complex encoder handling in real time are done by the I/O module. In this way, and with the systems capable of being operated within networks, we can now solve 95% of inspection tasks using area-scan cameras. We would welcome additional, similar technologies—for example, with linescan or network-based camera interfaces (GigE). <br /><br /><b>VSD</b>: How do you think that the machine-vision market differs in Germany from that in other parts of Europe or North America? What changes have you seen recently in the German market? <br /><b>Pape</b>: Our customers usually are machinery manufacturers who provide sales and servicing of complete machinery and equipment including optical inspection systems.
Machinery incorporating CTMV vision systems has been installed all over North America and Europe. General trends are global. However, in Germany, there is intense competition between suppliers of components and system integrators. On the other hand, mechanical engineers increasingly ask for industrial image-processing solutions, identifying them as a substantial competitive edge. In general, we see a market differentiation toward plug-and-play solutions that are easy to configure and will not require system-integrator services in the future. At the same time, some customers are interested in solutions for new, complex inspection tasks that have not been tackled so far. This is where we as CTMV can provide know-how. <br /><br /><b>VSD</b>: What new markets and technologies do you see for CTMV in the future? <br /><b>Pape</b>: In the next few years, we see continuous opportunities for growth. CTMV will focus on further development of business solutions in medical, electronics, plastics, and automotive markets. And we will develop new, innovative image-processing solutions within the scope of strategic partnerships with mechanical-engineering companies.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-77058089831434484192008-02-28T12:45:00.009-06:002008-02-28T14:47:50.712-06:00Developing new machine-vision software applications<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/neurocheck.jpg" align="left"><br /> </div><br /><span class="blogpreview">Christian Demant, NeuroCheck and Industrial Vision Systems, on the demands of advanced development platforms</span> <br /><br /><br />The fast development cycles in the IT industry result in permanent pressure on machine-vision software developers to update their knowledge. Integral parts of that continuous learning process are new networked system architectures and software-development tools.
<br /><br />Advanced new development platforms like .NET provide the option to easily combine software modules written in several different programming languages. This supports development by teams with different professional backgrounds. Therefore, developers managing these teams must have knowledge of all these programming languages. <br /><br />The availability of the latest multicore CPU technology demands multithreaded software to take advantage of this new processing power. Enabling software for parallel computation adds a totally new level of complexity to the development process. The sequential programming approach that has been used for decades is now, in many cases, obsolete. The synchronization of parallel executing threads leads to new error situations, which are extremely difficult to predict and to simulate in advance.<br /><br />In the future I see the growing importance of an in-depth understanding of software design patterns. The growing size and complexity of machine-vision software applications requires a much more systematic approach during the design and planning stage. As recently as 10 years ago, a software developer would start the implementation a couple of hours after discussing the specification with the sales department.<br /><br />The software-development process has moved toward something comparable to a team of architects in charge of a complex facility. The design of the building and the drafting of the plans and specifications are the main intellectual tasks and take up a large part of the development time. The implementation afterward is handcraft, but both jobs together require skilled teams.
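The thread-synchronization hazard described above can be made concrete with a toy sketch (the scenario and all names are illustrative, not from NeuroCheck): several inspection threads update one shared reject counter. An unprotected read-modify-write lets two threads read the same old value, so one increment is silently lost—intermittently, which is exactly why such bugs are hard to predict or simulate.

```python
import threading

class RejectCounter:
    """Hypothetical shared tally updated by several inspection threads."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def add_unsafe(self, n=1):
        # Race: read, then write, with no synchronization in between.
        self.value = self.value + n

    def add_safe(self, n=1):
        # The lock serializes the read-modify-write, so no update is lost.
        with self._lock:
            self.value = self.value + n

def run(update, threads=4, iterations=25_000):
    """Hammer the counter from several threads and return the final value."""
    counter = RejectCounter()
    workers = [threading.Thread(
        target=lambda: [update(counter) for _ in range(iterations)])
        for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value

print(run(RejectCounter.add_safe))   # always 100000
# run(RejectCounter.add_unsafe) may return less than 100000, and whether
# it does can vary from run to run -- an error situation that is
# "extremely difficult to predict and to simulate in advance".
```

The locked version is deterministic; the unlocked one fails only under the right interleaving, which is what makes parallel code a new level of complexity.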
Lone fighters have no chance of survival.<br /><br />Christian Demant is general manager, NeuroCheck, Stuttgart, Germany, and <br />Director, Industrial Vision Systems, Kingston Bagpuize, UKPennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-2216884934161901472008-01-29T10:36:00.001-06:002008-01-31T12:43:58.506-06:00Then and now: 20 years of machine-vision system integration<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/dechow4.jpg" align="left"><br /> </div><br /><span class="blogpreview">David Dechow, aptúra Machine Vision Solutions, on the changing role of the machine-vision system integrator</span> <br /><br />Within the context of a machine-vision application, integration quite simply is where someone has to actually make things work. With that definition, the role of the machine-vision integrator has changed little from “ancient times,” when we interfaced vidicon cameras with plodding computers to check the presence of an object based on 32 levels of gray. With scant few exceptions, today’s machine-vision devices do not yet arrive from the manufacturer shrink-wrapped, fully configured, ready to plug in, turn on, and perform an inspection. Machine-vision technology is a combination of diverse components that must be correctly selected based upon the needs of the application, competently installed, programmed, or configured to provide a robust result, then tested to ensure reliability that will withstand a production environment. Bottom line: a machine-vision application still must be “integrated”; someone has to “make it work.”<br /><br />What is new is that machine vision has become a technology that today is significantly more accessible to the plant engineer than it was even several years ago. Yet the need and demand for the machine-vision system integrator is as strong as ever. 
What has changed ever so slightly is the perceived ROI or value of the integration partner. At one time the machine-vision system integrator was absolutely required even to consider an inspection project. The prevailing perception now is that the inspection task could be successfully done “in-house,” but that it is more efficient and effective to use an outside integration resource for machine vision. A parallel situation occurred in the maturation of the PLC integration market, where today it is very common for a company to hire contractors to develop and maintain machine logic rather than have a team of company experts.<br /><br />Looking ahead, the machine-vision integrator likely will continue to be an engineering resource for end users, machine builders, and other integrators, providing services on a contract or time-and-materials basis. Integrators will be called upon to execute more complicated inspection systems and will need to maintain relatively higher levels of machine-vision expertise.
The barriers to entry into the vision-integrator marketplace remain low, but the name of the game is efficiency and profitability, and the machine-vision integration business will increasingly need to find economies of scale that will sustain the business model.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-14805887894277417702007-12-10T09:04:00.000-06:002008-01-23T16:02:07.194-06:00Working with a machine-vision-system integrator<div class="blogmeta"><br /> <img src="http://images.pennnet.com/vsd/blogs/nagle_photonew.jpg" align="left"><br /> <br> A discussion with John Nagle, Nagle Research<br /></div><br /><span class="blogpreview"><b>VSD</b>: What sort of systems or services does Nagle Research provide?<br />Nagle: While many companies are involved with 2-D machine vision...</span> <br /><p><b>VSD</b>: What sort of systems or services does Nagle Research provide?<br /><br />Nagle: While many companies are involved with 2-D machine vision, we have decided to devote ourselves almost exclusively to 3-D machine-vision development. We think this has allowed us to build expertise and a body of experience with 3-D technology that is second to none in the industry. Nagle Research is a SICK (Minneapolis, MN, USA; www.sick.com) vision integrator. We are entirely brand loyal to SICK|IVP vision products, most often using the Ranger series of cameras. We started the company in June 2003.<br /><br />My business partner, Andy Thyssen, is also a software engineer, and he is the chief technology officer of the company. Our first project and what is generally regarded as our “claim to fame” is the Aurora automated high-speed railroad track-inspection system. In our past lives, we spent more than a decade making video games for Nintendo, PlayStation, and others.
That experience has been immeasurably valuable in keeping the performance of our systems on the leading edge.<br /><br /><b>VSD</b>: What sort of questions should be asked when considering the services of a machine-vision system integrator?<br /><br />Nagle: Suffice it to say, it is impossible to engineer a solution without a thorough understanding of the problem. But to truly know the problem, you have to get past the superficial goals and get to the meat of the challenges that any solution will have to face. There can be many gremlins hiding just below the surface of what seems like an “easy” project.<br /><br />For example, a candy factory needs a vision system that can count jellybeans moving down a conveyor belt. That’s the superficial goal. Obviously this is a very straightforward task for a vision system to accomplish. Planning a solution intelligently, however, requires much more information. What should the system do with the count? Does it need to trigger a signal when a certain count is reached? Does it need to communicate with a PLC? If a jellybean is malformed, does it count? And how does the system determine what is a “good” jellybean? How fast are the jellybeans moving? Do we need to count the individual colors? What are the space considerations for the vision system? This is very “goal-oriented” fact-finding research, and so this sort of question-and-answer probing can be done even by nontechnical people. Once all of the major and minor goals are known, then it is straightforward to isolate the specific disciplines and skill sets required to make the project a success.<br /><br /><b>VSD</b>: So what can be done in-house by a company?<br /><br />Nagle: Evaluating one’s own capabilities or the capabilities of company staff members is the next step in deciding how much, if any, of the project can be done in-house. 
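At its simplest, the jellybean-counting task Nagle describes reduces to segmenting each frame into blobs, counting them, and comparing the count against a threshold that drives a signal or PLC message. A minimal sketch in pure Python, where a flood fill stands in for a real vision library's blob tool and <code>TARGET_COUNT</code> is an illustrative assumption, not anything from the interview:

```python
# Count connected blobs ("jellybeans") in a binary image and check whether
# the count has reached a hypothetical trigger threshold. A real system
# would segment a camera frame first; here the frame is a 0/1 grid.
from collections import deque

def count_blobs(image):
    """Count 4-connected regions of 1s in a 2-D list of 0/1 values."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and not seen[r][c]:
                count += 1                      # new blob found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the whole blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

TARGET_COUNT = 3  # hypothetical batch size that would trigger a PLC signal

frame = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
n = count_blobs(frame)
print(n, n >= TARGET_COUNT)  # → 3 True
```

Even this toy version surfaces the questions above: touching jellybeans merge into one blob, a malformed bean still counts, and nothing here talks to a PLC; each answer adds real engineering work.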
If the project can be accomplished with off-the-shelf vision solutions or relatively simple smart cameras and only minor external connectivity is required, then the chances of being able to do this are good. If complicated record keeping, PLC connectivity, or advanced image-processing algorithms are required, it is almost certain that a third-party vision-system integrator with software-development capability will be necessary.<br /><br />Different skills are required to integrate vision systems of varying degrees of complexity. Even a good list of necessary skills cannot be comprehensive and should be treated only as a guideline or rule of thumb.<br /><br /><b>VSD</b>: What are the implications of working with a 2-D vs. 3-D system?<br /><br />Nagle: Most people who have experience with vision are likely to have worked only with 2-D systems. Far less common are those who have worked with 3-D. Two-dimensional systems deal with color and contrast; three-dimensional systems deal with materials and geometry. They share a lot of the same concepts, but, in general, 3-D is more difficult to implement. This is because now we are not just dealing with a light and a camera; we have to deal with laser light frequency; beam spread angle and thickness; laser power requirements based on material properties and stand-off distance; ranging algorithms; angular orientation of camera/subject/laser to obtain required accuracy; safety issues related to working with the laser; and coping with less-than-ideal material properties.<br /><br />Integrating a SICK IVC-3D or a Ruler product can mitigate some of these issues, in that the camera lens, laser type, and orientation are fixed at the factory (which also limits their applicability to some degree). 
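To make the camera/laser orientation point concrete: in a sheet-of-light setup, a change in surface height shifts the imaged laser line sideways, and the triangulation angle sets the trade-off between height sensitivity and occlusion. A back-of-envelope sketch, assuming a simplified geometry in which a height step h shifts the line laterally by h·tan θ; all numbers are illustrative, not from any SICK product:

```python
# Simplified laser-triangulation ranging: camera looks straight down,
# laser sheet arrives at angle theta from vertical, so a height step h
# displaces the imaged laser line by h * tan(theta). Ranging inverts that.
import math

def height_from_shift(pixel_shift, mm_per_pixel, theta_deg):
    """Convert the laser line's lateral image shift into surface height (mm)."""
    lateral_mm = pixel_shift * mm_per_pixel   # image shift scaled to world units
    return lateral_mm / math.tan(math.radians(theta_deg))

# Example: 12-pixel shift at 0.05 mm/pixel with a 30-degree triangulation angle
h = height_from_shift(12, 0.05, 30.0)
print(round(h, 3))  # → 1.039 (mm)
```

The same relation shows why the fixed geometry helps: a steeper angle gives more pixels of shift per millimeter of height (finer resolution) but more shadowing, and factory-calibrated units remove that tuning burden at the cost of flexibility.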
Ranging algorithms and material properties must still be dealt with in any case.<br /><br /><b>VSD</b>: Is a vision software-development kit difficult to learn?<br /><br />Nagle: In any nonsmart-camera system, the integrator must have a thorough knowledge of the vision hardware SDK (software development kit), including the SDK for the frame grabber if applicable. Speaking in general terms, these are highly nontrivial software toolkits, and a deep-rooted foundation in C++ and software development is essential. Even with the requisite C++ experience, the SDK itself—like any complex system—has a learning curve. If the project can absorb the extra time and cost associated with becoming proficient with the SDK, then it is very feasible.<br /><br /><b>VSD</b>: What are the benefits of third-party integration?<br /><br />Nagle: Any competent vision integrator should be able to integrate vision in simple to moderately complex projects. Many vision integrators do not have great depth in software and electrical engineering, however, and so the more complicated vision projects are beyond them. When choosing an integrator, it becomes very important to match the skills they bring to the table with the skills that will be required. Dealing with an integrator can save an enormous amount of time and development effort. In many cases, experienced integrators have saved companies from spending hundreds of thousands of dollars on inappropriate equipment and software.<br /><br />For example, we were asked by a railroad-equipment manufacturer to provide consultation as to what camera would be required for a 2-D high-speed railroad-inspection system. The company had already spent many thousands of dollars on image-processing software to locate defects in crossties using 2-D imaging techniques. 
The problem was that their approach had not accounted for surface stains, sealant, and debris confusing the analysis software. We ultimately concluded that a 3-D solution was more appropriate for this application and developed a Ranger-based solution that handles these material properties nicely.<br /><br /><b>VSD</b>: When working with a system integrator, what are you paying for?<br /><br />Nagle: Speaking only for Nagle Research, in most cases vision projects are quoted on a flat-fee basis. Usually the process is a phone conference to discuss the challenges and goals; if possible, samples are sent for testing and proof of concept; and, if the project proves solvable, we submit a proposal.<br /><br />With projects whose goals are a moving target (for example, additional defects to detect or additional accuracy requiring more cameras), we will most likely propose a flat fee for a defined scope of work and a standard hourly fee for work that is out of scope. The proposal will include time for travel, but the travel expenses are billed separately.<br /><br />For our fee, the client receives our professional consultation, software and electrical engineering resources, and, in the end, a solution that meets their requirements. In most cases, unless specifically agreed to, the client does not get source code to the final solution. In some arrangements we will relinquish source code for the application, such as the user interface and project-specific algorithms. Our proprietary Javelin Vision Engine, however, remains closed source. Javelin is the 3-D technology infrastructure that helps us develop more robust vision systems.<br /><br /><b>VSD</b>: What are the fundamental questions to ask before calling an integrator?<br /><br />Nagle: The basic questions that need to be answered before an integrator is called are:<br />• Is the project outside the scope of in-house capabilities?<br />• Is the company open to using third-party integrators? 
• What is the price of failure or delays arising from lack of internal experience?<br />• Is there a budget for vision that includes third-party integration?<br />• Is it likely that, given a workable solution within budget, the project would proceed?<br /><br />If the answer to all of these is “yes,” then almost any integrator would be willing to take the challenge. A competent vision integrator is the key to successfully deploying a machine-vision system. Whether that expertise comes from within or from a third party is a decision the client ultimately will have to make. The most important thing to keep in mind is that, in any event, a broad skill set and expertise in a variety of disparate disciplines will be required to complete the project on time and on budget. </p>Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0tag:blogger.com,1999:blog-4340430328917098832.post-82701576100350547602007-10-24T08:24:00.002-05:002009-03-23T09:06:24.213-05:00Inspection systems focus on performance, robustness, and stability<div class="blogmeta"><br /> <img src="http://images.pennnet.com/articles/vsd/cap/cap_0710vsd_massen.jpg" align="left"><br />A discussion with Robert Massen, Baumer Inspection<br /></div><br /><span class="blogpreview"><br />VSD: What systems or services does your company provide? What is the origin of your company?</span><br />VSD: What systems or services does your company provide? What is the origin of your company? <br /><br />Massen: Our company started via a management buyout from a former for-profit institute, which I founded together with the Steinbeis Foundation at the University of Applied Sciences in Konstanz, Germany, around 1982. 
We have a history of 25 years in machine vision, starting with a large number of industrial customer projects and focusing now on in-line inspection of aesthetic surfaces: multicolored, patterned, and textured products such as laminate floorings, decorated furniture panels, and ceramic tiles. We design, install, and service in-line inspection, sorting, and process-monitoring vision systems worldwide and are a leader in the field of automated laminate-flooring inspection. <br /><br />VSD: What technologies and components do you use in machine-vision-related applications? How often do you evaluate competing technologies? <br /><br />Massen: To simulate the very peculiar human perception of decorated multicolored surfaces and at the same time detect physical defects such as scratches, bubbles, bad transparent protection layers, chipped edges, and so forth, we combine multiple camera/illumination systems into our multisensorial inspection technology, including color linescan cameras with diffuse ultrastable illumination, black-and-white linescan cameras and directed light (4k and 6k, mostly), and spectrally tuned linescan cameras and specific-wavelength illumination for the inspection of transparent protective layers. <br /><br />We prefer Camera Link frame grabbers, possibly with a bit of integrated FPGA preprocessing. We use a PC cluster architecture for achieving the high computing power required for that type of color and texture processing. To be flexible to the ever-changing aesthetic designs of very creative artists, we use almost 100% software-based image processing with our own libraries of image-processing algorithms. <br /><br />Almost half of our staff of 50 employees are software and vision specialists, who evaluate possibly competing technologies. We have a continuing-education philosophy, sending our vision specialists to conferences and having some of them work as part-time Ph.D. students. 
<br /><br />VSD: How do you evaluate the performance of the few color linescan cameras on the market? Which cameras do you use in your designs? <br /><br />Massen: A reasonably priced, ruggedized color linescan camera with high geometric resolution, a fast linescan frequency, and very low color seams, operating at variable product speeds and radiometrically stable under the severe temperature fluctuations of an industrial production line, is still a bit of a dream. We never trust the published technical specifications of the camera manufacturers, and even less those communicated by distributors, but we do test these cameras extensively in our labs. We often discover subtle but technically important flaws or items missing in the published specs. We always have to select appropriate lenses carefully, and these are hard to find, both for trilinear and especially for 3CCD prismatic color linescan cameras. <br /><br />VSD: How do you design your systems for OEM product obsolescence? <br /><br />Massen: As our systems are highly modular PC- and software-based architectures, we have no problems in adapting to a new generation of motherboards, multicore processors, or new operating systems. We program in a very modular way in standard C/C++, separating software and hardware, so changing to a new camera or frame grabber does not pose any problem. Our customers appreciate this guarantee of long operating life and of a familiar PC and Windows environment, even if arranged as clusters of up to 16 networked PCs. <br /><br />VSD: In which areas of the industry do you see the most growth? What are users demanding from you in the design of new systems? <br /><br />Massen: The broad field of nonindustrial vision markets such as security, traffic, and toll control; electronic driver assistance; postal distribution; and logistics will grow at a faster rate than the vision market for machine-vision systems operating in the production line. 
These systems will use a lot of components, software, and knowledge existing in the machine-vision scene. Some established machine-vision companies such as Vitronic (Wiesbaden, Germany; www.vitronic.com) are expanding quickly into these new markets in parallel with their ongoing machine-vision activity. <br /><br />A ColourBrain Laminate Inspection System from Baumer Inspection examines laminated fullboards at a Pergo production facility in Garner, NC, USA.<br /><br />Our customers are using ever-faster, highly automated, almost unmanned production plants for an ever-growing variety of decorations and for production batches ranging from hundreds of thousands down to batch size one (such as in automatic kitchen-producing plants). <br /><br />They ask us to offer them simple-to-use and stable inspection systems for their highly complex inspection tasks. For automatic very-high-speed visual sorting, they need very low overdetection rates (wrongly classifying good products as bad), fast training for new décors, and an integrated automatic process-monitoring and quality-management system. <br /><br />VSD: How will OEM components targeted toward machine-vision applications have to change to meet future needs? <br /><br />Massen: In our specific segment of inspection in production lines, the focus is more on technical performance, robustness, and stability than on price alone. We would appreciate better color linescan cameras operating without false-color seams at wide observation angles to decrease the height of our systems. We would also appreciate the development of cameras that integrate several narrow-spectral-band linescan sensors and fast 3-D sensors in one camera body. <br /><br />VSD: Could you compare the machine-vision markets in different industry segments in Europe and the rest of the world? <br /><br />Massen: The European machine-vision market is highly dominated by German companies, which produce some 82% of the European turnover. 
The German machine-vision scene is extremely active, both in the field of application-specific vision systems and in vision components (cameras, frame grabbers, illumination systems, and software libraries). The specific excellence of the German “Sondermaschinenbau” (specialized, highly automated production machinery) is closely related to German machine-vision companies, which has given a definite push to both. A good example of this process is the recent acquisition by the German Weinig Group, a leader in wood-processing machinery, of LuxScan Technologies, a wood-scanner company in Luxembourg. <br /><br />I do see a good and hard-to-copy future for similar marriages between advanced production-machinery companies and vision companies. At the same time, it is astonishing to see German companies such as Basler (Ahrensburg, Germany; www.basler-vc.com) exporting high volumes of cameras produced in high-wage Germany to low-wage China, again proof that technical excellence and professionalism can compete with low salaries. <br /><br />VSD: Could you discuss the impact of working with Baumer? What has that meant for your business? How are you now organized? <br /><br />Massen: The Swiss Baumer Group, a family-owned group of some 35 companies with a total of 2000 people, became a shareholder in MASSEN machine vision systems in 1992. Baumer was a great help in moving our activity from a more project-based institute activity to a product-based company by focusing on a small number of markets. The 100% integration into the group is therefore not a surprising move but a natural development. The rebranding into Baumer Inspection increases the visibility of Baumer as a unique group of companies producing the total spectrum of noncontact sensors, from classical proximity sensors to vision sensors, intelligent cameras, and application-specific machine-vision systems. <br /><br />We are a member of the vision technologies business unit of Baumer, which employs some 250 people. 
This is a very broad range of expertise that helps us both in terms of the vision components and technologies available from the group and as a worldwide machine-vision business supported by the presence of the group's subsidiaries. <br /><br />ROBERT MASSEN turned a for-profit research institute in image processing into the private MASSEN machine vision systems GmbH, which recently became part of the Baumer Group and was rebranded as Baumer Inspection. Editor in chief Conard Holton spoke with him about integrated system design.Pennwell Blogs Administratorhttp://www.blogger.com/profile/15757232455847950283noreply@blogger.com0