Rainer Obergrußberger has an extensive background in image processing and analysis and is the managing director of in-situ (Sauerlach, Germany; www.in-situ.de). Editor in chief Conard Holton spoke to him about integrating machine-vision systems and developing innovative products.
VSD: What does your company do; what services does it provide?
OBERGRUßBERGER: in-situ is a growing company with more than 20 years of experience in the fields of image processing and machine vision. We specialize in industrial, medical, and scientific applications, offering image-processing systems in a broad range of products, as well as specialized development of hardware and software. Our business units are PC vision, smart-camera applications, and OEM products. In-situ means literally “without change of location.” In our business, in-situ means measurements using noncontact methods.
VSD: What is your personal background in the machine-vision industry?
OBERGRUßBERGER: After an apprenticeship in the electronics industry, I studied computer science at the University of Applied Sciences in Rosenheim, where part of the course was focused on image analysis. During the last few years of my course I worked part-time for the image-processing lab at the university. I also worked part-time for the company vidisys on software for a tool-presetting machine, capturing and measuring the geometrical shapes of metal-cutting tools with a camera setup.
I started at in-situ in 2001, first as an application engineer, later as CTO, and since the beginning of 2007 I have been the managing director. Three years ago I began a master’s degree course with a focus on computer graphics and image analysis. My master’s thesis was on the analysis of patterns in the biological field, and I was lucky enough to spend eight months at the University of Queensland, Australia, to complete this work.
VSD: What technologies and components does in-situ use? How do you evaluate competing technologies?
OBERGRUßBERGER: We use a wide range of hardware from different companies, depending on the application we want to develop. Analog cameras are now playing only a minor role in the applications of in-situ. Digital interfaces are more flexible regarding signal transfer and scalability.
OBERGRUßBERGER: CMOS cameras still lack sensitivity and image quality, but they are usually cheaper than CCD-based hardware and often sufficient. As more and more digital camera types support standard computer interfaces, such as FireWire or Gigabit Ethernet, frame grabbers are often no longer needed. To evaluate competing technologies we must sometimes make lengthy tests with new hardware to discover the benefits and limitations of the new technology. As this can be quite time-consuming and expensive, we always try to rely on proven hardware for as long as possible. We don’t get on every new technology train that comes along, just because it’s trendy! Usually it takes some time for a new technology to compete with or even replace an old but well-proven one. In general, the competence of a solution provider such as my company is to decide which technology, including camera, illumination, and frame grabber, best fits the requirements of a customer’s application.
VSD: How do you approach a new application? Do you work with OEMs or other system integrators?
OBERGRUßBERGER: A new application is always a challenge, especially when there is little or no experience with the customer’s products or business. In my opinion, you should spend as much time as possible exploring what the customer’s application is all about and also thinking about what possibilities exist to provide the best solution. For example, in the beginning I don’t just speak to the boss of a company, but also to the worker who is confronted with a certain quality problem in a production line every day.
I find it best to have a look at how the company deals with certain quality problems, and at internal company processes. To fulfill a customer’s requirements, you have to completely understand how the company works. We always start an application by developing a proof of principle. If such a proof doesn’t take longer than one day, we do this for free. It’s the first step to show a potential customer the capabilities of the company. If possible, we try to get test samples covering the complete product spectrum, from good parts to faulty parts.
We’ve got various optical setups that we can use to test the provided parts. These include laser scanners, linescan and areascan camera setups with different resolutions and lenses, as well as smart cameras. We don’t develop hardware such as cameras or frame grabbers; we simply buy them from vision component suppliers such as Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) or Cognex (Natick, MA, USA; www.cognex.com). However, it’s a bit different with illumination. One of the most important things for providing a proper vision solution is having the most appropriate illumination concept. If standard illumination can’t satisfy the requirements, we develop special illumination ourselves.
VSD: How do you design your systems for product obsolescence?
OBERGRUßBERGER: Unfortunately, normal obsolescence of a vision system always comes along with the life cycle of the product that the system was developed for. To be able to fulfill further, as-yet-unknown requirements, hardware and software should be flexible enough to accommodate different configurations and new tools.
OBERGRUßBERGER: At in-situ, we increasingly try to develop software in a reusable way. A proper camera interface should support many kinds of cameras—those available now and in the future. As we can’t handle the wide range of hardware alone, we use standard software tools such as Cognex VisionPro or Stemmer’s Common Vision Blox. Software should contain options for data storage, statistics, and an easy-to-use graphical user interface. Quite often it is a trade-off between maximizing flexibility on the one hand and software that is well adapted to the customer’s requirements on the other.
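The reusable camera interface described above can be sketched as a small abstraction layer that decouples application code from the camera backend. The following Python sketch is purely illustrative—the class names and the file-based test backend are invented for this example, not in-situ’s actual code:

```python
from abc import ABC, abstractmethod

class Camera(ABC):
    """Minimal camera abstraction: concrete backends (FireWire, GigE,
    or a file-based test source) implement the same interface, so the
    application code stays unchanged when the hardware is swapped."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def grab(self) -> list:
        """Return one frame as a 2-D list of pixel intensities."""

    @abstractmethod
    def close(self) -> None: ...

class FileCamera(Camera):
    """Test backend that replays stored frames in a loop -- useful for
    regression-testing inspection software without real hardware."""

    def __init__(self, frames):
        self._frames = list(frames)
        self._index = 0
        self._opened = False

    def open(self):
        self._opened = True

    def grab(self):
        if not self._opened:
            raise RuntimeError("camera not opened")
        frame = self._frames[self._index % len(self._frames)]
        self._index += 1
        return frame

    def close(self):
        self._opened = False
```

Because the inspection logic only ever sees the `Camera` interface, a camera type introduced years later needs only a new backend class, not changes to the application.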
VSD: What algorithms and specific software developments do you see emerging in the next five years?
OBERGRUßBERGER: Many of the basic ideas of image analysis were established in the 1970s, including edge detection, template matching, and image filters. However, the algorithms have improved greatly in the last 20 years, and new approaches have been created. Higher computing performance allows better and more complex algorithms to be calculated in real time and, therefore, to be used for machine-vision applications.
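Template matching, one of the classic 1970s techniques mentioned above, can be illustrated with a brute-force normalized cross-correlation search. This is a textbook sketch in plain Python—real machine-vision libraries use heavily optimized (and often vectorized) variants:

```python
def ncc_match(image, template):
    """Slide `template` over `image` (both 2-D lists of grayscale
    values) and return the (row, col) offset with the highest
    normalized cross-correlation score, plus the score itself."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])

    # Precompute the zero-mean template and its norm once.
    t_vals = [v for row in template for v in row]
    t_mean = sum(t_vals) / len(t_vals)
    t_dev = [v - t_mean for v in t_vals]
    t_norm = sum(d * d for d in t_dev) ** 0.5

    best_score, best_pos = 0.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w_vals = [image[r + i][c + j]
                      for i in range(th) for j in range(tw)]
            w_mean = sum(w_vals) / len(w_vals)
            w_dev = [v - w_mean for v in w_vals]
            w_norm = sum(d * d for d in w_dev) ** 0.5
            if t_norm == 0 or w_norm == 0:
                continue  # flat window: correlation undefined
            score = sum(a * b for a, b in zip(t_dev, w_dev)) / (t_norm * w_norm)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

The exhaustive double loop is exactly why such methods were once “coffee algorithms”: the cost grows with image size times template size, which modern hardware and smarter search strategies have since made tractable in real time.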
Vectorized pattern-matching has opened new dimensions in pick-and-place applications, as well as in product identification. Twenty years ago such algorithms would have been called “coffee algorithms,” as there would have been enough time to drink a good cup of coffee in the time it took to process a single image! Another good example is the shape-from-shading principle that we picked up three years ago. This also originated in the 1970s. We worked on optimizing hardware and software and are now able to extract 3-D surface data in real time. I think there will be new approaches for better and faster 3-D pattern-matching in the next five years. The “grip in a box” is still hard to solve and probably one of the biggest challenges in robot applications—in this case, a reliable and fast 3-D model finder would revolutionize the possibilities. In addition, tracking has become very important for camera surveillance and automatic guidance systems. Automatic and dynamic traffic control supported by cameras will be the reality in the near future.
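The shape-from-shading family mentioned above is closely related to photometric stereo, which recovers per-pixel surface normals from several images of the same scene under known lighting. Below is a minimal sketch of the classic least-squares formulation (Woodham’s method) using NumPy—a simplified illustration of the principle, not in-situ’s optimized real-time implementation:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel unit surface normals from >= 3 images of a
    scene lit from known directions.  `images` is a list of H x W
    arrays, `lights` an N x 3 array of unit light-direction vectors.
    Returns an H x W x 3 array of unit normals."""
    L = np.asarray(lights, dtype=float)                      # N x 3
    I = np.stack([np.asarray(im, dtype=float) for im in images])  # N x H x W
    n_img, h, w = I.shape
    I_flat = I.reshape(n_img, -1)                            # N x (H*W)

    # Lambertian model: intensity = L @ g, with g = albedo * normal.
    # Solve the least-squares system for every pixel at once.
    g, *_ = np.linalg.lstsq(L, I_flat, rcond=None)           # 3 x (H*W)

    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 0, g / np.maximum(albedo, 1e-12), 0.0)
    return normals.T.reshape(h, w, 3)
```

With three or more calibrated light directions this reduces 3-D surface reconstruction to one small linear solve per pixel, which is what makes real-time operation plausible once the hardware and lighting are engineered for it.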
VSD: How will OEM components targeted toward machine-vision applications have to change to meet future needs?
OBERGRUßBERGER: I think that OEM components will have to be packed with intelligence, reliability, and flexibility, and that there will have to be standards. The vision-sensor market has grown considerably in the last few years. Traceability opened a broad and lucrative market for ID reading and verification. OEM components for complex solutions will always have to be integrated by professional solution providers such as my company, but the vision-sensor market is targeted toward machine builders with less experience in machine-vision issues. Therefore, vision sensors have to be easy to use, easy to understand, and very stable in different environmental conditions.
For interaction between different systems there will have to be standards for communication protocols, for hardware adaptation (for example, power supply, cables, and mounting), and for software interfaces. For in-situ, the GenICam standard has already been a large and important step toward a common and generally used camera description language. But it’s not just the components that have to change to meet future needs. In addition, the way in which vision solutions are provided and integrated will have to be increasingly customer-oriented. Vision systems shouldn’t be just a black box in a production line. Customers require adequate training on a system to learn how it works and how they can fix certain problems themselves. Every system, however small and simple it might be, must be properly documented and described to the customer.
VSD: What new markets and technologies do you see for in-situ in the future?
OBERGRUßBERGER: In the next few years the company wants to establish a new business unit in 3-D surface inspection. After a major modification in hardware and software, we developed a new shape-from-shading technology that allows reconstruction of 3-D surface data from a wide range of materials. With DotScan—a vision system to inspect embossed Braille printing on pharmaceutical folding boxes—we have already shown that this principle works well in an industrial environment. Our patented inline version, called SPARC (surface pattern analyzer and roughness calculator), allows the inspection of surfaces in various fields of view and accuracies down to a few microns on moving parts. In November 2007, SPARC won the Vision Award prize for applied and innovative vision technology at the VISION trade fair in Stuttgart, Germany.