Predictive maintenance is a form of data analysis that uses various IoT devices to detect patterns in machinery that may indicate potential problems. By assessing the stages of deterioration, predictive maintenance lets you predict the life cycle of machinery and catch failures before they happen.

Predictive maintenance means predicting when a machine will break down and using that information to schedule maintenance in a way that is more efficient for your technicians. An example of predictive maintenance is using sensor data or engine data to estimate when a machine failure is going to happen.

Preventative maintenance is a maintenance schedule based on a regular, pre-determined cadence - e.g., monthly, annually, or based on the runtime of a machine. An example of preventative maintenance is changing the oil in your car every ten thousand miles.

What predictive maintenance does is attempt to reduce the frequency of maintenance visits by accurately determining the current state of machinery using various techniques.

By employing IoT devices, predictive maintenance techniques can assess the large amounts of data those devices generate. AI and machine learning models then churn through the data to provide predictions on the state of the machinery. Some of the IoT devices used for predictive maintenance include vibration sensors, microphones, thermal sensors, and infrared and ultrasonic sensors.

In this tutorial, adapted from a paper submitted to the 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2020), we will attempt to detect unbalance on a rotating shaft based on vibration data. We will train and evaluate a fully connected neural network on data that has undergone a Fast Fourier Transform (FFT) in Collimator.

The setup of the simulation is as follows. A DC motor is controlled by a motor controller and is in turn coupled to a mass of varying size. Three vibration sensors are placed in close proximity to the shaft. The model is shown below:

The entire dataset we will work with is freely available via the Fraunhofer Fordatis data space (https://fordatis.fraunhofer.de/handle/fordatis/151.2).

In total, datasets for four different unbalance strengths/weights were recorded (1D/1E ... 4D/4E), as well as one dataset with the unbalance holder but without an additional weight (i.e., without unbalance: 0D/0E). Here, D denotes the development (training) set and E the evaluation set. The unbalance weights are shown in the table below:

\begin{array}{|c|c|c|} \hline ID & \text{Radius [mm]} & \text{Mass [g]} \\ \hline 0D/0E & - & - \\ \hline 1D/1E & 14 \pm 0.1 & 3.281 \pm 0.003 \\ \hline 2D/2E & 18.5 \pm 0.1 & 3.281 \pm 0.003 \\ \hline 3D/3E & 23 \pm 0.1 & 3.281 \pm 0.003 \\ \hline 4D/4E & 23 \pm 0.1 & 6.614 \pm 0.007 \\ \hline \end{array}

We will begin by importing the libraries we will work with:
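The exact import list depends on your environment; a typical set for this workflow (NumPy, pandas, scikit-learn, TensorFlow) looks like:

```python
# Core scientific stack used throughout this tutorial.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler
import tensorflow as tf
```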

After importing our libraries we will unzip the data obtained from the Fraunhofer Fortadis data space:
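A minimal sketch of the extraction step, assuming the archive was downloaded into the working directory; the file name `fordatis_dataset.zip` is a placeholder, so substitute the name of the file you actually downloaded:

```python
import os
import zipfile

def extract_archive(archive_path: str, dest_dir: str = "data") -> None:
    """Unzip the downloaded Fordatis archive into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest_dir)

# Placeholder archive name -- adjust to your download.
if os.path.exists("fordatis_dataset.zip"):
    extract_archive("fordatis_dataset.zip")
```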

Next we will assign the CSV files to variables for processing.
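One way to do this is to read each measurement's CSV into a pandas DataFrame keyed by its ID. The file names `0D.csv` ... `4E.csv` are assumptions based on the dataset IDs, so adjust them to match your extracted files:

```python
import os
import pandas as pd

def load_split(suffix: str, data_dir: str = "data") -> dict:
    """Read one CSV per measurement ID, e.g. 0D.csv ... 4D.csv."""
    return {i: pd.read_csv(os.path.join(data_dir, f"{i}{suffix}.csv"))
            for i in range(5)}

# Development (training) and evaluation sets:
# dev_frames  = load_split("D")
# eval_frames = load_split("E")
```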

Next, we apply labels and define our samples per second and seconds per analysis. The window of data on which the neural network will make predictions is the product of the samples per second and the seconds per analysis, which are 4096 and 1 respectively.
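In code, the window size falls out directly from those two constants:

```python
SAMPLES_PER_SECOND = 4096    # sensor sampling rate
SECONDS_PER_ANALYSIS = 1     # length of each prediction window in seconds
WINDOW = SAMPLES_PER_SECOND * SECONDS_PER_ANALYSIS  # samples per window
```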

We create a function to assign labels to the data, i.e., assigning unbalance (1) vs. no unbalance (0) to each data row.
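A sketch of such a helper, assuming each measurement arrives as a flat 1-D vibration signal:

```python
import numpy as np

WINDOW = 4096  # samples per prediction window (4096 samples/s * 1 s)

def window_and_label(signal, label):
    """Cut a 1-D vibration signal into WINDOW-sized rows and attach a label.

    Returns (X, y): X has one row per complete window, y repeats the label
    (0 = no unbalance, 1 = unbalance) once per row.
    """
    signal = np.asarray(signal, dtype=np.float32)
    n_windows = len(signal) // WINDOW           # drop the incomplete tail
    X = signal[: n_windows * WINDOW].reshape(n_windows, WINDOW)
    y = np.full(n_windows, label, dtype=np.int64)
    return X, y
```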

Next, we run through each development data file and each evaluation data file to create two large datasets with the correct unbalance labels.
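That loop can be sketched as a function that stacks every measurement into one (X, y) pair; measurement ID 0 is the balanced reference, while IDs 1-4 carry an unbalance weight. The helper below repeats the windowing logic so the sketch stands on its own:

```python
import numpy as np

WINDOW = 4096  # samples per window

def window_and_label(signal, label):
    """Slice a signal into WINDOW-sized rows with one label per row."""
    signal = np.asarray(signal, dtype=np.float32)
    n = len(signal) // WINDOW
    return (signal[: n * WINDOW].reshape(n, WINDOW),
            np.full(n, label, dtype=np.int64))

def build_dataset(signals_by_id):
    """Stack every measurement into one big (X, y) pair.

    signals_by_id maps the measurement ID (0..4) to a 1-D vibration array.
    """
    X_parts, y_parts = [], []
    for mid, signal in signals_by_id.items():
        label = 0 if mid == 0 else 1        # no unbalance vs. unbalance
        Xi, yi = window_and_label(signal, label)
        X_parts.append(Xi)
        y_parts.append(yi)
    return np.concatenate(X_parts), np.concatenate(y_parts)
```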

Now the dataset for training, X, contains 32226 samples with 4096 values each, as well as the associated label information y with 32226 labels (one label per sample). The dataset for validating the trained model, X_val, contains 8420 samples, plus the labels y_val accordingly.

Next, we will perform a train-test split of the dataset using scikit-learn. The result will be one set for training and one set for testing.
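Using scikit-learn's `train_test_split` (shown here on placeholder arrays standing in for the real windowed data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the windowed dataset (X, y).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4096)).astype(np.float32)
y = rng.integers(0, 2, size=100)

# Hold out part of the development data for testing; stratify keeps the
# unbalance/no-unbalance ratio the same in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```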

Next, we will employ NumPy's Fast Fourier Transform (FFT) to return each window's discrete Fourier transform.
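Because the input windows are real-valued, the FFT output is symmetric and only the first half of the amplitude spectrum carries independent information; a sketch on placeholder data:

```python
import numpy as np

WINDOW = 4096
rng = np.random.default_rng(0)
X_train = rng.standard_normal((8, WINDOW))  # placeholder raw windows

# Amplitude spectrum of each window; keep the first WINDOW // 2 bins,
# since the second half mirrors the first for a real-valued signal.
X_train_fft = np.abs(np.fft.fft(X_train, axis=1))[:, : WINDOW // 2]
```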

Scikit-learn's preprocessing library has a RobustScaler, which standardizes a dataset in a way that handles outliers properly. Here we scale using the range between the 5th and 95th percentiles, which curbs the effect of outliers on skewing the predictions of the neural network.
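A sketch of the scaling step with scikit-learn's `RobustScaler` and a 5-95 percentile range; the placeholder arrays stand in for the FFT features:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

rng = np.random.default_rng(0)
X_train_fft = rng.random((64, 2048))  # placeholder amplitude spectra
X_test_fft = rng.random((16, 2048))

# Centre on the median and scale by the 5th-95th percentile range,
# limiting the influence of extreme amplitude values.
scaler = RobustScaler(quantile_range=(5.0, 95.0)).fit(X_train_fft)
X_train_scaled = scaler.transform(X_train_fft)
X_test_scaled = scaler.transform(X_test_fft)
```

Note that the scaler is fit on the training data only and then applied to both splits, so no information from the test set leaks into the preprocessing.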

The initial submission of the paper to ETFA 2020 tested neural networks with hidden layers ranging from zero to four. It concluded that the network with four hidden layers had the highest accuracy, at 98.1%.

We're now going to train, test, and evaluate a four-layer fully connected neural network in Collimator. We'll begin by defining our TensorFlow neural network with four hidden layers, trained for 100 epochs. Then we will use TensorFlow's fit function to train and validate our model.
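A sketch of such a network; the hidden-layer widths below are illustrative assumptions, since only the number of hidden layers (four) is fixed, and the input dimension assumes the half-spectrum of a 4096-sample FFT window:

```python
import numpy as np
import tensorflow as tf

INPUT_DIM = 2048  # half of the 4096-sample FFT window

# Four hidden layers as in the paper; the widths are assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(INPUT_DIM,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # unbalance probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# On the real data:
# history = model.fit(X_train_scaled, y_train, epochs=100,
#                     validation_data=(X_test_scaled, y_test))
```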


At this point we can see that our model has a predictive accuracy of 99% on the training set. The next step is to run the model against our evaluation data and see how it fares.

Our model has an accuracy of 97% on the evaluation data. We can then compare the neural network's predictions with the actual labels and pinpoint the windows the network struggled with for future testing.
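The comparison itself is a few lines of NumPy; the probability and label arrays below are placeholders standing in for `model.predict` output and `y_val`:

```python
import numpy as np

# Placeholder model outputs and true evaluation labels.
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.8])
y_val  = np.array([1,   0,   0,   1,   1  ])

y_pred = (y_prob > 0.5).astype(int)            # threshold the probabilities
misclassified = np.where(y_pred != y_val)[0]   # window indices to revisit
print(misclassified)  # -> [2 3]
```

Inspecting the raw signals behind those window indices is a natural starting point for understanding where the model falls short.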