How hyperspectral image data supports machine vision
Find out how easy this can be by watching the video from our presentation at CHII 2018 in Graz:
Spectroscopy enables the user to identify spectral features that are invisible to conventional cameras and the human eye. These features are usually directly related to the optical properties of the analysed surface. Because each material has a distinct spectral signature, such data can not only separate specific materials from one another, but also support qualitative statements about the analysed object. Spectral imaging goes one step further and reveals the spatial distribution of different materials and quality differences.
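The idea of separating materials by their spectral signatures can be sketched with a simple spectral-angle comparison. This is a generic illustration with made-up reflectance values, not the method used by perClass; a smaller angle between an unknown pixel spectrum and a reference signature means the spectra are more similar:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller = more similar shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reflectance signatures sampled at a few wavelengths
vegetation = np.array([0.05, 0.08, 0.45, 0.50])  # strong NIR reflectance
soil       = np.array([0.15, 0.20, 0.25, 0.30])  # gradual rise

pixel = np.array([0.06, 0.09, 0.43, 0.48])       # unknown pixel spectrum
label = ("vegetation"
         if spectral_angle(pixel, vegetation) < spectral_angle(pixel, soil)
         else "soil")
print(label)  # → vegetation
```

Applied pixel by pixel, such a comparison turns a spectral image into a map of material occurrences.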
Quantification, qualification and classification
One focus is therefore to identify different materials and surfaces where the human eye fails: spectral features may be too subtle to recognize, the information may be hidden in the near infrared, or the eye is simply too slow for moving processes. Image classification techniques help to identify these differences and to quantify the result. Hyperspectral imaging for the supervision and evaluation of industrial processes can support and even automate decisions, speed up those processes and ultimately save money.
Setting up a suitable software application to retrieve information from spectral data usually involves a great deal of development, testing and evaluation. In most cases this is very time-consuming, and developments often suffer from a lack of expertise, because the requirements are demanding: mathematics, statistics, remote sensing, optics, programming, and more.
Providing a solution
To overcome these hurdles, we can now support our customers. Through our collaboration with perClass BV, a software company developing tools for the interpretation of spectral images and machine learning solutions, the user is able to (1) record spectral data, (2) use this data to train a statistical classifier for specific materials, and (3) apply this classifier to the live data stream as a plugin to the Cubert Utils software – all within minutes.
perClass is a classification tool based on machine learning that includes state-of-the-art classifiers such as support vector machines and random forests. With perClass Mira, a GUI built on the perClass engine, the user does not need a deep understanding of machine learning and classification techniques – it simply works without specialist knowledge.
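What such a classifier does under the hood can be sketched in a few lines. This is a generic random-forest example with synthetic spectra (not the perClass implementation): each training sample is one pixel spectrum, and the model learns to assign a material class to it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_bands = 50  # one reflectance value per spectral band (assumed count)

# Synthetic training spectra for two materials with slightly different
# mean reflectance levels
herb_a = rng.normal(loc=0.40, scale=0.05, size=(200, n_bands))
herb_b = rng.normal(loc=0.48, scale=0.05, size=(200, n_bands))
X = np.vstack([herb_a, herb_b])
y = np.array([0] * 200 + [1] * 200)  # class label per spectrum

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy
```

Tools like perClass Mira wrap this kind of training loop behind a GUI, so the user only labels pixels and inspects results.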
Real-time classification using machine vision
In order to demonstrate the potential of the hyperspectral camera for machine vision we placed some samples of different herbs (chamomile, oregano, basil) on a rotary plate in our laboratory.
The hyperspectral snapshot camera S185 SE is mounted above these samples. It is equipped with a 23 mm lens, which provides a field of view of 13°. This ensures that the camera captures all necessary details and enough spectral information from the samples.
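The 13° field of view translates directly into the width of the scene the camera sees. A quick geometric check (the 1 m working distance here is an assumed value for illustration, not from the setup description):

```python
import math

fov_deg = 13.0        # field of view of the 23 mm lens
distance_mm = 1000.0  # assumed working distance above the samples

# Width of the imaged area at that distance
swath_mm = 2 * distance_mm * math.tan(math.radians(fov_deg / 2))
print(round(swath_mm))  # → 228 (mm wide scene at 1 m)
```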
Fig. 3 shows the samples as seen by the hyperspectral camera. The spectra in Fig. 4 represent the spectral information of all pixels inside the corresponding rectangles in Fig. 3. The reflectance of the different herbs is very similar across the entire spectrum, which makes it challenging for the classifier to separate them.
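Extracting such rectangle spectra from a hyperspectral cube amounts to averaging all pixel spectra inside the region. A minimal sketch with a synthetic cube (array shapes and the rectangle coordinates are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
cube = rng.random((50, 50, 30))  # synthetic cube: rows x cols x bands

# Rectangle (row0:row1, col0:col1) standing in for one labelled sample region
roi = cube[10:20, 15:25, :]
mean_spectrum = roi.reshape(-1, cube.shape[2]).mean(axis=0)
print(mean_spectrum.shape)  # one mean reflectance value per band
```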
We record a set of images with our software, which will be used to train the classifier.
After exporting the recorded images to perClass Mira, the first training step is to define three classes for the herbs and one for the background. This is done by simply drawing labels over known pixels within the image (Fig. 5). Using this reference information, the model can be trained and directly applied to the data (Fig. 6). The result shows that this first attempt already works quite well, although some artifacts in the form of misclassified pixels appear.
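The label–train–apply workflow can be sketched outside the GUI as well. In this toy version (synthetic data, generic random forest – not the perClass internals), the user's drawn labels become a mask, the labelled pixel spectra train the model, and the model is then applied to every pixel to produce a classification map:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
h, w, bands = 40, 40, 20
cube = rng.normal(0.3, 0.02, size=(h, w, bands))
cube[5:15, 5:15, :] += 0.2  # a "sample" patch with higher reflectance

# Labelled pixels drawn by the user: class 1 on the sample, class 0 on background
labels = np.full((h, w), -1)  # -1 = unlabelled
labels[7:12, 7:12] = 1
labels[25:30, 25:30] = 0

train_mask = labels >= 0
X = cube[train_mask]          # labelled pixel spectra, shape (n_labelled, bands)
y = labels[train_mask]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Apply the trained model to every pixel to obtain a classification map
class_map = clf.predict(cube.reshape(-1, bands)).reshape(h, w)
```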
To improve the model and avoid misclassified pixels, several techniques can be applied interactively in Mira. A simple step in this example is to exclude some of the bands at the beginning and at the end of the wavelength range from the model search, by redefining the start and end band (Fig. 7). This reduces the spectral information the classifier needs to retrieve an acceptable result, and the model consequently becomes less error-prone.
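On the data level, excluding edge bands is simply a slice over the band axis. The band count and wavelength range below are assumptions for illustration, and the 500–900 nm cut-offs are example values, not the ones used in Mira:

```python
import numpy as np

wavelengths = np.linspace(450, 950, 125)  # assumed band centres in nm
cube = np.zeros((10, 10, wavelengths.size))

# Keep only bands between 500 and 900 nm; edge bands are often noisy
keep = (wavelengths >= 500) & (wavelengths <= 900)
trimmed = cube[:, :, keep]
print(trimmed.shape[2], "of", wavelengths.size, "bands kept")
```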
Once the classifier delivers satisfying results, it can easily be exported from the perClass Mira interface and integrated into the Cubert Utils software, where it is applied directly to the live data stream (Fig. 8). The classifier performs as expected, especially considering that it took only a few minutes to generate and optimize. Most pixels are classified correctly (chamomile in purple, basil in blue, oregano in green and background in dark red). The misclassified pixels lie mainly in the border regions of the herb shells, which was expected since we did not define a class for those spectrally mixed pixels. The stability of the classifier shows particularly when the rotary plate is activated (see the video): the pixels are still classified correctly, although the lighting conditions, such as the illumination angle, change for each pixel at every moment.
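Conceptually, the live application is a per-frame loop: grab a frame, classify every pixel, display the map. The sketch below uses a hypothetical `next_frame` function and a toy classifier as stand-ins; in practice, frame acquisition and model execution are handled by the Cubert Utils plugin.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
bands = 10

# Toy classifier standing in for the model exported from perClass Mira
X = rng.random((200, bands))
y = (X.mean(axis=1) > 0.5).astype(int)
clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

def next_frame():
    """Hypothetical stand-in for grabbing one hyperspectral frame."""
    return rng.random((20, 20, bands))

# Frame-by-frame loop: classify every pixel of each incoming frame
for _ in range(3):
    frame = next_frame()
    class_map = clf.predict(frame.reshape(-1, bands)).reshape(frame.shape[:2])
```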
This example shows the strong potential of hyperspectral snapshot cameras and how intelligent software solutions such as machine vision can provide valuable support for many kinds of applications, especially when live data must be processed.