Asset Flow screenshot

Asset Flow

“A check engine light for a factory”

Asset Flow was the alpha prototype I designed and built with a team of engineers and data scientists to track and predict the health of large manufacturing machines. Our data scientists trained several models on high-frequency raw sensor readings to watch for patterns in the data that precede specific kinds of machine failure. Long before a breakdown, small anomalies would appear, and our software would alert machine operators to the behavior so parts could be inspected and replaced before a far more catastrophic outcome.

The information architecture of this domain is fairly straightforward: a factory can be represented as a tree of machines, systems, sub-systems, parts, and finally individual sensors (pressure, temperature, rotation speed, and so on).
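That hierarchy can be sketched as nested plain objects. This is a hypothetical shape, not the production schema; the names (`collectSensors`, the sensor ids, the level keys) are illustrative:

```javascript
// Hypothetical sketch of the asset hierarchy described above:
// factory → machines → systems → sub-systems → parts → sensors.
const factory = {
  name: "Plant A",
  machines: [{
    name: "Stamping Press 1",
    systems: [{
      name: "Hydraulic System",
      subSystems: [{
        name: "Pump Assembly",
        parts: [{
          name: "Main Pump",
          sensors: [
            { id: "ps-101", type: "pressure", unit: "psi" },
            { id: "ts-102", type: "temperature", unit: "C" },
          ],
        }],
      }],
    }],
  }],
};

// Walk the tree and collect every sensor — the kind of traversal
// that would back a cross-machine sensor search.
function collectSensors(node, found = []) {
  if (node.sensors) found.push(...node.sensors);
  const children =
    node.machines ?? node.systems ?? node.subSystems ?? node.parts ?? [];
  for (const child of children) collectSensors(child, found);
  return found;
}
```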

Each of these sensors generates a timeseries of analog readings ingested at high frequency, say once per second. A given model watched collections of these sensor readings, along with other timeseries data fed into it, and labeled each minute of time as either normal or anomalous.
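A minimal sketch of that rollup, assuming a readings format of `{ timestamp, value }` pairs and standing in a simple threshold check where the real ML model would go:

```javascript
// Hypothetical sketch: bucket per-second readings into one-minute
// windows and ask a model to label each window. `model` here is a
// stand-in predicate, not the real trained model.
function labelMinutes(readings, model) {
  const buckets = new Map();
  for (const r of readings) {
    const minute = Math.floor(r.timestamp / 60000) * 60000;
    if (!buckets.has(minute)) buckets.set(minute, []);
    buckets.get(minute).push(r.value);
  }
  return [...buckets.entries()].map(([minute, values]) => ({
    minute,
    label: model(values) ? "anomalous" : "normal",
  }));
}

// Stand-in model: flag a minute whose mean value exceeds a threshold.
const meanAbove = (threshold) => (values) =>
  values.reduce((a, b) => a + b, 0) / values.length > threshold;
```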

Each model represented a different way the machine had been known to fail in the past: brake failure, electrical failure, and mechanical failure are some examples. The symptoms leading up to one failure mode might look nothing like those preceding another.
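Structurally, that meant the same minute of data could be scored by several independent models, one per known failure mode. A hedged sketch, with toy threshold predicates standing in for the real trained models:

```javascript
// Hypothetical: one stand-in model per known failure mode. In the
// real system each of these would be a trained ML model, not a
// hand-written threshold.
const failureModeModels = {
  "brake failure": (values) => Math.max(...values) > 9,
  "electrical failure": (values) => values.some((v) => v < 0),
  "mechanical failure": (values) => values.length > 0 && values[0] > 5,
};

// Score one minute of values against every model and return the
// failure modes that flagged it.
function flaggedModes(values, models = failureModeModels) {
  return Object.entries(models)
    .filter(([, model]) => model(values))
    .map(([mode]) => mode);
}
```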

The big challenge in conceiving of the user experience for this product was: how do you connect the very small scale (in time and in the asset hierarchy) to the very big (overall machine health)? Working with the team, I found these questions had no straightforward answers. The 30,000 ft view sounded simple: send notifications to operators when machines were acting weird. But the logic for summarizing the data at the highest level, and the threshold of anomalous behavior that should trigger a notification, weren't clear. Perhaps that clarity would come later. My intuition was that until it did, the primary mode of interaction for the first version of the UI should be open-ended and exploratory.

The software operators were using at the time allowed for browsing the sensor data, but it was clunky and slow: you had to seek out particular sensors and load ten minutes of their data at a time. What we built should be fluid and fast, with search across machines and sensors. Until we could work out the nuances of the smarts of our software, we would deliver an exploratory tool that did what the legacy software did, but faster and better. The far-off vision was a factory running on autopilot. The first step was to give operators anomaly data and historical timeseries machine stats.

The first of the UI's two screens showed a snapshot of today's latest data; the second, a browsable timeline that allowed zooming from seconds to years in a few clicks.

Noodle.ai
Industry

Industrial Manufacturing

Tools

  • Figma
  • React
  • D3
  • JS