Tulip’s frontline operations platform supports native machine connectivity, bringing your machine data in context with data from people, devices, and systems. Machine state changes can be monitored through networked connections or with sensors and our edge devices.
In this post, we will explore how our team retrofitted a legacy machine, connected it to the cloud, and estimated its state using a simple ML model and Edge IO. At our Somerville headquarters, we have an espresso machine and a water dispenser that were perfect candidates for measuring utilization.
If this sounds familiar, we “launched” a new product on April Fools’ Day from this experiment →
Keeping it as simple as possible, we connected a current sensor and a vibration sensor using an Edge IO and a PhidgetHub. For the espresso machine, we ran an ML model that detects whether the machine is running, idle, or stopped. This allows us to monitor the usage of the machine and potentially optimize its maintenance and performance.
Read on as we cover the required hardware and software components, as well as the steps to set up and run the machine learning model.
Setting up the hardware
To monitor the coffee machine, we connected an off-the-shelf current sensor to an Edge IO. We also added a piezo vibration sensor to monitor high-speed vibrations. Edge IO supports high-speed monitoring through its onboard ADC inputs.
For the current sensor, we connected a PhidgetHub to the Edge IO, which converts the sensor’s analog signal into readings the Edge IO can consume.
To collect the data coming from the PhidgetHub in Tulip, we use the standard Phidgets Node-RED nodes for VoltageInput. The espresso machine’s PhidgetHub is directly plugged into the Edge IO, while the water dispenser PhidgetHub is connected via WiFi to the same device, enabling us to track utilization across both devices with a single edge device.
Collecting and processing data
To get the data from the Edge IO into the cloud, we used the onboard Tulip Table Node-RED Node. This Node enables data from the edge device to be added to a Table in Tulip. We configured each record to store a Timestamp, Name, and Number Value from the sensor, and collected data from all of the sensors into a single table.
Once the data was in a Table, we exported it to a CSV, loaded it with pandas, and started visualizing the data and building our machine learning model.
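As a rough sketch of this loading step (the column names Timestamp, Name, and Value and the sample rows below are illustrative assumptions, not our actual export):

```python
import io
import pandas as pd

# Stand-in for the CSV exported from the Tulip Table; the sensor names
# and values here are made up for illustration.
csv_data = io.StringIO(
    "Timestamp,Name,Value\n"
    "2023-04-01T09:00:00Z,espresso_current,0.12\n"
    "2023-04-01T09:00:01Z,espresso_current,4.80\n"
    "2023-04-01T09:00:02Z,water_vibration,0.03\n"
)

df = pd.read_csv(csv_data, parse_dates=["Timestamp"])

# Separate the espresso machine readings from the other sensors
espresso = df[df["Name"] == "espresso_current"].set_index("Timestamp")
print(espresso["Value"].describe())
```

From here, the espresso frame can be plotted directly to eyeball the machine’s states.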
We first sorted out the data from the espresso machine sensors and quickly identified the three states of the machine:
Off
Idle/standby
Running/making coffee
Our first observation was quite interesting, as we noticed how much current the machine was using during its Idle state. Every 30 seconds, there is a short spike that decays back down to zero - we determined this to be related to the heating element that keeps the coffee machine at an appropriate temperature.
For modeling the states, our main goals were:
We want to avoid having to label extensive data.
A coffee machine has a finite set of things it does: grinding beans, heating, drawing a shot, making milk foam. Because of this, our sensor data looks different for each work step, which should make it possible to find generic features in our data that allow us to cluster these parts of the program into separate states.
These states can then be assigned to Stopped/Running/Idle by an Operator using standard Tulip machine triggers.
To start this process, we ran a moving average to smooth out the heating cycles we identified. Because of the length of the cycles, we chose a 1-minute moving average. Heating cycles on this machine tended to last around 30 seconds, and the full coffee cycle is about 2-3 minutes long, depending on the temperature of the machine when it is first turned on.
Additionally, we included lagged features: the moving average from one and two minutes earlier. This is a standard method in time series analysis that allows the model to see rising or falling patterns in the data.
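A minimal pandas sketch of this feature pipeline, assuming a 1 Hz signal (the synthetic ramp and the column names are placeholders, not our real data):

```python
import pandas as pd

# Synthetic 1 Hz current signal standing in for the real sensor stream
idx = pd.date_range("2023-04-01 09:00", periods=300, freq="s")
signal = pd.Series(range(300), index=idx, dtype=float, name="current")

# 1-minute moving average smooths out the ~30 s heating spikes
features = pd.DataFrame({"ma_1min": signal.rolling("60s").mean()})
features["ma_lag1"] = features["ma_1min"].shift(60)   # average from 1 minute ago
features["ma_lag2"] = features["ma_1min"].shift(120)  # average from 2 minutes ago
features = features.dropna()  # drop warm-up rows without a full lag history
print(features.head())
```

The resulting three-column frame is what gets fed to the clustering model.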
Building the machine learning model
For the clustering model, we wanted something with as few parameters as possible and a built-in method to estimate the “right” number of clusters. We chose a Bayesian Gaussian mixture model.
This reflects the 6 different work steps the machine performs (off, grinding beans, heating, drawing a shot, making milk foam, manual handling), so we chose 6 as the number of components in the cluster model.
Since it’s an unsupervised model, we collected data for a day, which we then used to train the model. Validation was done visually: we checked whether the clusters the model detected aligned with the states apparent from the sensor data.
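A sketch of this clustering step with scikit-learn’s BayesianGaussianMixture; the synthetic three-regime data below merely stands in for a day of real feature vectors:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-in for a day of features: three well-separated operating
# regimes (off, idle, brewing) in a 3-dimensional feature space.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.05, size=(200, 3)),  # off: near-zero current
    rng.normal(1.0, 0.10, size=(200, 3)),  # idle: heating-cycle baseline
    rng.normal(5.0, 0.50, size=(200, 3)),  # brewing: high current draw
])

# 6 components, one per expected work step; the Dirichlet prior drives the
# weights of unneeded components toward zero, which is the built-in estimate
# of the "right" cluster count mentioned above.
model = BayesianGaussianMixture(n_components=6, max_iter=500, random_state=0)
states = model.fit_predict(X)
print(np.round(model.weights_, 3))
```

Inspecting `model.weights_` shows which of the 6 components the data actually supports.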
From these plots, we can see that State 1 is Off, 4,5 are Idle, and 0,2,3 seem to be various stages of coffee making (grinding, drawing a shot and milk foam making/cooling down).
Deploying the model and building a Tulip App
After building and testing our clustering machine learning model, it was time to deploy it. We stored the model in S3 using MLflow as follows: mlflow.sklearn.save_model(model, "s3://bucket_name/registry_prefix/model_name/version").
To execute the model, we opted to use an AWS API Gateway and an AWS Lambda function to fetch the model from S3 in MLflow format. The endpoint was gateway-address.com/inference/{model_name}, and the request data consists of the moving-average and time-lagged features.
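A hedged sketch of what such a Lambda handler might look like; every name here (the loader, the caching, the payload shape) is an assumption for illustration, with model loading injected so the sketch stays runnable without AWS or MLflow:

```python
import json

def make_handler(load_model):
    """Build a Lambda-style handler; load_model is e.g. mlflow.sklearn.load_model."""
    cache = {}  # keep loaded models warm across invocations

    def handler(event, context=None):
        # API Gateway passes the {model_name} path parameter and JSON body through
        name = event["pathParameters"]["model_name"]
        features = json.loads(event["body"])  # e.g. [[ma_1min, ma_lag1, ma_lag2], ...]
        if name not in cache:
            # Hypothetical S3 layout, mirroring the save path above
            cache[name] = load_model(f"s3://bucket_name/registry_prefix/{name}")
        states = cache[name].predict(features)
        return {"statusCode": 200, "body": json.dumps([int(s) for s in states])}

    return handler
```

The handler returns the predicted cluster ID for each feature row, which Node-RED then writes back as a machine attribute.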
To execute the model on new incoming data, we had two options: set up an HTTP connector and execute it within a Tulip App, which would require a device to run the app at all times, or execute an HTTP POST from within Node-RED and post the cluster ID as a machine state attribute using standard Tulip Node-RED nodes. We chose the latter option and successfully deployed the model.
The Node-RED flow to calculate feature data and execute the model looks as depicted below:
Now the machine is fully set up, and Tulip is collecting the data with our model, so we can calculate how many coffees are produced per day and how much time the machine spends in the Running, Idle, and Off states.
As you can see above, the model still needed some work, so we set up Tulip Player on an iPad next to the espresso machine, running a Tulip app where users can give feedback on the model’s accuracy. Additionally, it reminds users to switch off the machine after finishing, to save energy.
With the data from the model, we could then build Dashboards to display our results and utilization:
Additionally, we analyzed some of the raw data to calculate the current consumption per state for a regular work day. We found that the espresso machine uses 80% of current (which approximates energy consumption) when idle, and only 20% of current goes toward making coffee. Since this is a small machine in a non-industrial setting, the amount of waste is not that large.
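The per-state breakdown boils down to a groupby over state-labeled readings; a toy sketch, with made-up numbers chosen to reproduce the 80/20 split described above:

```python
import pandas as pd

# Made-up readings labeled with the model's predicted state; equal current
# values mean the consumption split simply follows the label counts.
day = pd.DataFrame({
    "state":   ["Idle"] * 8 + ["Running"] * 2,
    "current": [1.0] * 10,
})

# Fraction of the day's total current attributable to each state
share = day.groupby("state")["current"].sum() / day["current"].sum()
print(share)
```

On the real data, the same groupby over a full work day produced the 80% Idle / 20% Running figures.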
However, in manufacturing operations there could be many situations where a large amount of energy is wasted by idling machines. Monitoring machine utilization lets you get alerted to state changes and respond more quickly to an idling machine, saving your team time and reducing energy waste.
Conclusion
To summarize this post, we explored how to retrofit legacy machines and connect them to the cloud to estimate their availability using a simple ML model.
We used an espresso machine located in our office in Somerville, Massachusetts, and covered the required hardware and software components, as well as the steps to set up and run the ML model. The model was a Bayesian Gaussian mixture model for clustering (with moving averages and lagged features), with feature calculation and model execution driven from an Edge IO.
An App for coffee makers in our office provides an interface to give feedback if the model didn’t detect a coffee being made or was too slow to react.
Additionally, we built a dashboard that continuously monitors the coffees made at Tulip HQ. We estimated the total current used by each state of the coffee machine, and found that only 20% goes toward making coffee - the rest is spent idling!
This experiment is part of a series we have conducted around state detection for machine monitoring, including one run while we created the Edge IO at the Autodesk Technology Centers, Boston. We have continued to explore state detection models on CNC machines used by manufacturers, as well as other household devices and equipment such as a washing machine. Each piece of equipment requires a few minor tweaks to the model’s components and moving averages, but we continue to detect states across various cycles.
This work is coming to fruition, and we are excited to be able to share some new offerings from Tulip coming later this year.