Technical paper: Imaging Modules and AI in Quality Control
Topic: Measurement technology, image processing, CAQ and AI
Author: Markus Riedi, Opto GmbH
Date: 16.03.2020
AI has arrived in society. We are in the middle of the 4th Industrial Revolution and are confronted with fundamental changes in the global economic structure. The ongoing corona crisis will also fundamentally change the way we live and work. The vulnerability of existing processes may give a huge boost to digitalization and automation and reduce prejudice against new approaches. Just as in society, industry should put this insight to the test with mature new methods. There are already many successful examples in medical technology, drug research and pathology.
In production, every quality assurance officer wants a statement on product quality as quickly as possible in order to control the production process. Unfortunately, determining the shape and position, the colour, the tribological properties and all the other criteria that make up a product's function currently requires different measuring instruments used one after the other, and costs a great deal of time and money.
What if there were a way to cover all of this with a single analyser that is simple, inexpensive, non-contact, fast and independent of the environment? Ingenious, but unrealistic.
Until now.
The idea is to take the human evaluation of surfaces as the basis for digitization. A network is fed with the vast number of available measurement cycles, with years of experience with the product and with the information known about it, and a Solino reflection analysis detects anomalies relative to an ideal state. The resulting data set contains all product-relevant information, which only needs to be analyzed and classified when changes occur.
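To make that workflow tangible, here is a minimal sketch, assuming the Solino reflection analysis delivers a numeric feature vector per surface point; the function name, array shapes and tolerance value are illustrative assumptions, not Opto's actual interface:

```python
import numpy as np

def detect_anomalies(measured, reference_cycles, tolerance=5.0):
    """Flag surface points whose reflection features leave the tolerance band
    learned from many measurement cycles of known-good parts.

    measured         : (n_points, n_features) reflection features of the part under test
    reference_cycles : (n_cycles, n_points, n_features) features of known-good parts
    tolerance        : deviation (in units of natural process scatter) still counted as normal
    """
    ideal_mean = reference_cycles.mean(axis=0)           # the learned ideal state
    ideal_std = reference_cycles.std(axis=0) + 1e-9      # natural scatter per point and feature
    deviation = np.abs(measured - ideal_mean) / ideal_std
    anomaly_mask = (deviation > tolerance).any(axis=1)   # any feature out of band -> anomaly
    return deviation, anomaly_mask

# Example: 50 reference cycles, 10,000 surface points, 8 reflection features each
rng = np.random.default_rng(0)
reference = 1.0 + rng.normal(0.0, 0.05, size=(50, 10_000, 8))
part = 1.0 + rng.normal(0.0, 0.05, size=(10_000, 8))
part[1234] += 0.8                                        # inject one defect

deviation, mask = detect_anomalies(part, reference)
print("anomalous points:", np.flatnonzero(mask))         # the injected point is flagged
```

The resulting deviation array is exactly the kind of data set described above: it carries all anomalies of the test object and only needs to be analyzed and classified when something changes.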
This makes it possible to establish traceability to standards, to generate measurement results, or to use the data directly for production control. That would close the control loop for artificial intelligence. It sounds simple, and it is. Nevertheless, it will take some time before we trust these results, because the decisions about good and bad are not traceable to standards but were formed in the data space. Then again, Google bases its cat-or-dog decision not on the image information itself but on a comparison with its digital twin, and it is not bad at it. However, Google's data set is only a fraction of the one available here, and its tolerances between good and bad are also simpler.
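As a purely illustrative sketch of such a closed loop (the parameter names and the proportional rule are assumptions, not a description of a real controller), the analysis result could feed straight back into a process parameter:

```python
def control_step(anomaly_rate, target_rate, process_parameter, gain=0.5):
    """One step of a simple proportional control loop: the measured anomaly
    rate adjusts a production parameter instead of ending up in a report."""
    error = anomaly_rate - target_rate
    # Drift above the target pulls the parameter back, drift below raises it;
    # clamp the result to the machine's permitted range [0, 1].
    return min(max(process_parameter - gain * error, 0.0), 1.0)

# Example: quality has drifted, so the parameter is reduced slightly
print(control_step(anomaly_rate=0.08, target_rate=0.02, process_parameter=0.6))
```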
As the example shows, this information can easily be converted back into image information so that, for instance, only the impurities are read out and separated into particles or fingerprints. This merely costs computing time and requires a further analysis with existing image analysis procedures in order to reach a decision.
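A hedged illustration of that read-out step, assuming the deviation data can be rasterised into a 2D anomaly map; the size/shape heuristic separating compact particles from large smeared fingerprints is an assumption for the example, not the actual image analysis procedure:

```python
import numpy as np
from scipy import ndimage

def classify_impurities(anomaly_map, threshold=5.0):
    """Turn a 2D anomaly map back into image-like regions and label each
    region as 'particle' or 'fingerprint' by a simple size/shape heuristic."""
    mask = anomaly_map > threshold
    labels, n_regions = ndimage.label(mask)           # connected anomaly regions
    results = []
    for region in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == region)
        area = ys.size
        # Bounding-box extent as a crude size measure
        extent = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
        kind = "particle" if area < 50 and extent < 15 else "fingerprint"
        results.append({"region": region, "area": int(area), "kind": kind})
    return results

# Example: a small anomaly map with one dust particle and one smeared print
amap = np.zeros((200, 200))
amap[40:44, 60:64] = 8.0          # compact, particle-like blob
amap[120:170, 30:140] = 6.0       # large smear, fingerprint-like
print(classify_impurities(amap))
```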
We have started to test deep learning tools and big data algorithms together with the unique Solino technology, which allows us to base the classification not on images but on data sets containing ALL anomalies of the test object. This enables us to use the Solino technology to emulate human perception, rather than using standards to evaluate the quality of a product as traditional defect specification solutions do.
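To make "classification on data sets instead of images" concrete, the sketch below trains an ordinary classifier on per-part anomaly features (count, mean and maximum deviation, spatial spread) rather than on pixels. The feature set, the synthetic training data and the use of scikit-learn are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Each row describes one test object by its anomalies, not by its image:
# [number of anomalous points, mean deviation, max deviation, spatial spread]
good = np.column_stack([
    rng.poisson(3, 500),             # a few benign outlier points
    rng.normal(1.0, 0.2, 500),
    rng.normal(2.0, 0.5, 500),
    rng.normal(5.0, 1.0, 500),
])
bad = np.column_stack([
    rng.poisson(40, 500),            # many anomalous points
    rng.normal(3.0, 0.5, 500),
    rng.normal(8.0, 1.5, 500),
    rng.normal(20.0, 4.0, 500),
])
X = np.vstack([good, bad])
y = np.array([0] * 500 + [1] * 500)  # 0 = good part, 1 = bad part

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```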
Opto is developing new AI modules based on its constantly growing family of imaging modules, which are equipped with our own Solino technology. More than 40 years of experience in developing cameras with integrated optics and illumination for microscopy and machine vision, together with a long tradition of programming our own image processing solutions, form the basis of these new modules. More and more machine manufacturers, system integrators and corporations are currently working on integrating the plug-and-play OEM solutions built around the imaging modules described here into their devices and processes in order to achieve better results and, in the long term, lower equipment costs. This disruptive approach may be adopted faster than expected due to the upcoming changes and become accepted as a replacement for traditional measurement methods. If AI can detect and analyze diseases faster and better, and new vaccines can be developed faster as a result, why shouldn't the results speak for themselves and prevail in traditional measurement technology as well?