Fully learnable deep wavelet transform for unsupervised monitoring of high-frequency time series
Gabriel Michau, Gaetan Frusque, and Olga Fink
PNAS February 22, 2022 119 (8) e2106598119; https://doi.org/10.1073/pnas.2106598119
- Edited by David Donoho, Department of Statistics, Stanford University, Stanford, CA; received April 7, 2021; accepted January 10, 2022
Monitoring of industrial assets often relies on high-frequency (HF) signal measurements. One difficulty of dealing with such measurements in the industrial context is reconciling the high-frequency sampling with low-dimensional decision states (e.g., healthy/unhealthy), in a setting where labels are very often unavailable. Here, we propose a fully unsupervised deep-learning framework for high-frequency time series that extracts a meaningful and sparse representation of raw signals and flexibly handles time series of different lengths, thereby overcoming several limitations of existing deep-learning approaches. The decomposition framework will be very useful for handling HF signals automatically and is an important basis for future applications with HF data.
High-frequency (HF) signals are ubiquitous in the industrial world and are of great use for monitoring industrial assets. Most deep-learning tools are designed for inputs of fixed and/or very limited size, and many successful applications of deep learning to the industrial context use extracted features as inputs, which are manually and often arduously obtained compact representations of the original signal. In this paper, we propose a fully unsupervised deep-learning framework that is able to extract a meaningful and sparse representation of raw HF signals. We embed in our architecture important properties of the fast discrete wavelet transform (FDWT), such as 1) the cascade algorithm; 2) the conjugate quadrature filter property that links together the wavelet, scaling, and transposed filter functions; and 3) the coefficient denoising. Using deep learning, we make this architecture fully learnable: Both the wavelet bases and the wavelet coefficient denoising become learnable. To achieve this objective, we propose an activation function that performs a learnable hard thresholding of the wavelet coefficients. With our framework, the denoising FDWT becomes a fully learnable unsupervised tool that does not require any type of pre- or postprocessing or any prior knowledge of wavelet transforms. We demonstrate the benefits of embedding all these properties on three machine-learning tasks performed on open-source sound datasets. We perform an ablation study of the impact of each property on the performance of the architecture, achieve results well above the baseline, and outperform other state-of-the-art methods.
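To make the ingredients named in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: it shows the textbook conjugate quadrature filter relation that derives the wavelet (high-pass) filter from the scaling (low-pass) filter, one level of the FDWT cascade (filter then downsample by two), and a smooth, differentiable relaxation of hard thresholding of the kind that could make the threshold learnable by gradient descent. The function names and the sigmoid-based relaxation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cqf_pair(h):
    """Conjugate quadrature filter relation: derive the high-pass
    (wavelet) filter g from the low-pass (scaling) filter h via
    g[n] = (-1)^n * h[L-1-n], with L the filter length."""
    L = len(h)
    return np.array([(-1) ** n * h[L - 1 - n] for n in range(L)])

def dwt_level(x, h):
    """One level of the FDWT cascade: convolve with the low-pass and
    high-pass filters, then downsample by two. In the cascade
    algorithm, the approximation `a` is fed to the next level."""
    g = cqf_pair(h)
    a = np.convolve(x, h[::-1])[::2]  # approximation coefficients
    d = np.convolve(x, g[::-1])[::2]  # detail coefficients
    return a, d

def relaxed_hard_threshold(x, tau, alpha=10.0):
    """Smooth surrogate for hard thresholding of wavelet coefficients:
    coefficients with |x| well above tau pass through almost unchanged,
    those well below tau are suppressed toward zero. Because the
    expression is differentiable, tau (and the sharpness alpha) could
    be learned end to end -- a hypothetical stand-in for the paper's
    learnable thresholding activation."""
    return x / (1.0 + np.exp(-alpha * (np.abs(x) - tau)))

# Example with the Haar scaling filter (an assumption for illustration).
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
g = cqf_pair(h)                      # orthogonal to h by construction
a, d = dwt_level(np.ones(4), h)      # constant signal: details vanish
y = relaxed_hard_threshold(np.array([5.0, 0.05]), tau=1.0)
```

For a constant input the detail coefficients are (near) zero away from the borders, while the relaxed threshold keeps the large coefficient and suppresses the small one, which is the sparsity-inducing behavior the denoising stage relies on.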