Multi-sensor data fusion using deep learning for bulky waste image classification

Bihler, M.; Roming, L.; Jiang, Y.; Afifi, A. J.; Aderhold, J.; Čibiraitė-Lukenskienė, D.; Lorenz, S.; Gloaguen, R.; Gruna, R.; Heizmann, M.

Abstract

Deep learning techniques are commonly used to tackle computer vision problems such as recognition, segmentation, and classification from RGB images. With a diverse range of sensors available, industry-specific datasets are acquired to address specific challenges. These datasets span multiple modalities: the images have different numbers of channels, and their pixel values carry different physical interpretations. Applying deep learning methods to obtain optimal results on such multimodal data is not straightforward. One feasible approach to improve classification performance in this setting is data fusion, which aims to integrate the information available from all sensors into an optimal result. This paper investigates early fusion, intermediate fusion, and late fusion in deep learning models for bulky waste image classification. The models are trained and evaluated on a multimodal dataset consisting of RGB, hyperspectral near-infrared (NIR), thermography, and terahertz images of bulky waste. The results show that multimodal sensor fusion can improve classification accuracy over a single-sensor approach on this dataset: late fusion performed best on our test data, reaching an accuracy of 0.921, compared with intermediate and early fusion.
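
The following is a minimal illustrative sketch of the late-fusion idea described above, written in PyTorch. It is not the authors' implementation: the two modalities (RGB and a hyperspectral NIR cube), the channel counts, the number of classes, and the stream architecture are all assumptions chosen for illustration. Each modality is processed by its own CNN stream with its own classification head, and the per-stream class probabilities are averaged to form the fused decision.

```python
# Minimal late-fusion sketch (illustrative assumptions: 3-channel RGB,
# 25-band hyperspectral NIR, 4 waste classes; not the paper's architecture).
import torch
import torch.nn as nn


def make_stream(in_channels: int, num_classes: int) -> nn.Sequential:
    """One small CNN stream with its own classification head."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, num_classes),
    )


class LateFusionClassifier(nn.Module):
    """Each modality is classified independently; the decisions are
    fused afterwards by averaging the per-stream class probabilities."""

    def __init__(self, rgb_channels: int = 3, nir_channels: int = 25,
                 num_classes: int = 4):
        super().__init__()
        self.rgb_stream = make_stream(rgb_channels, num_classes)
        self.nir_stream = make_stream(nir_channels, num_classes)

    def forward(self, rgb: torch.Tensor, nir: torch.Tensor) -> torch.Tensor:
        p_rgb = self.rgb_stream(rgb).softmax(dim=1)
        p_nir = self.nir_stream(nir).softmax(dim=1)
        return (p_rgb + p_nir) / 2  # fused class probabilities


if __name__ == "__main__":
    model = LateFusionClassifier()
    rgb = torch.randn(2, 3, 64, 64)   # batch of RGB images
    nir = torch.randn(2, 25, 64, 64)  # batch of hyperspectral NIR cubes
    print(model(rgb, nir).shape)      # torch.Size([2, 4])
```

By contrast, early fusion would concatenate all sensor channels into one input tensor fed to a single network, and intermediate fusion would merge the streams' feature maps before a shared classification head.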

Keywords: multispectral data; data fusion; image classification; CNN; multi-stream model; intermediate fusion; late fusion; multi-sensor data; multimodal data

Permalink: https://www.hzdr.de/publications/Publ-38060