Sarivan, I.-M., Greiner, J. N., Alvarez Lorenzo, D., Euteneuer, F., Reichenbach, M., Madsen, O., Bogh, S.
No
Procedia Manufacturing
Proceedings Paper
Scientific
0.504
01/01/2020
000863680700052
In this paper, we present a novel method for utilising wearable devices with Convolutional Neural Networks (CNNs) trained on acoustic and accelerometer signals in smart manufacturing environments to provide real-time quality inspection during manual operations. We show through our framework how recorded or streamed sound and accelerometer data gathered from a wrist-attached device can be used to classify certain user actions as successful or unsuccessful. The classifier is a deep CNN model trained on Mel-frequency Cepstral Coefficients (MFCCs) extracted from the acoustic input signals. The wearable device provides feedback through three modalities: audio, visual and haptic, thus ensuring the worker's awareness at all times. We validate our findings through deployments of the complete AI-enabled device in production facilities of Mercedes-Benz AG. From the conducted experiments we conclude that acoustic and accelerometer data are valuable for training a classifier for action examination during industrial assembly operations, and that the device provides an intuitive interface for continued and improved quality inspection. (C) 2020 The Authors. Published by Elsevier Ltd.
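The abstract names MFCCs as the features on which the deep CNN is trained, but the record does not include the authors' pipeline. As a rough illustration of that feature-extraction step, the following is a minimal numpy sketch of MFCC computation (Hann-windowed framing, power spectrum, triangular mel filterbank, log, DCT-II); all parameter values here (sample rate, FFT size, filter and coefficient counts) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCC features.

    Parameter defaults are illustrative, not taken from the paper.
    """
    # Split the signal into overlapping Hann-windowed frames.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack(
        [signal[i * hop:i * hop + n_fft] * window for i in range(n_frames)]
    )
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # Triangular filters centred on mel-spaced frequencies.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)

    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients.
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * np.arange(n_mfcc)[:, None])
    return log_mel @ dct.T
```

In a setup like the one the abstract describes, each frame's coefficient vector (or a stacked window of frames) would then be fed to the CNN classifier that labels the action as successful or unsuccessful.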
Smart Wearable Devices; Deep Learning; CNN; MFCC; Sound Classification; Smart Manufacturing; Quality Inspection