AI RECOMMENDED ACTIONS

MAINTAINX

In industrial maintenance, technicians are the backbone of uptime. However, they are often sent into the field with incomplete data.

This project focused on building an AI-enabled recommendation engine within work orders to reduce guesswork, leverage historical tribal knowledge, and decrease machine downtime.

The problem

Technicians frequently receive work orders with minimal documentation. Without easy access to machine history or specific manuals, they spend significant time:

  • Manually searching through physical manuals or old reports.

  • Attempting standard procedures when a specific, historical fault might be the actual culprit.

  • Flagging down supervisors or senior techs to fill in the blanks.

All of the above compounds into a high MTTR (mean time to repair), significantly increasing operational costs.

The hypothesis

By synthesizing work order history, comments, and manuals, we can provide AI-recommended actions directly within the work order.

If we surface specific shortcuts, such as identifying a recurring fault seen in similar machines, we can help technicians bypass generic troubleshooting and move straight to the solution, saving hours of labor per job.

The solution: intelligence at the point of work

When technicians start a specific repair, they are introduced to Recommended actions integrated directly into their work-execution journey, ensuring they have exactly what they need, when they need it.

Key Features

  • Historical pattern recognition: The AI flags when this specific machine (or similar models in the organization) has failed this way before, and surfaces that history where relevant.

  • Manual synthesis: Instead of a 200-page PDF, the AI extracts the specific calibration steps relevant to the reported fault.

  • Peer insights: The AI surfaces comments from previous interventions that faced similar issues, turning tribal knowledge into a shared company asset.

Tying design to ROI

To validate the value of these AI recommendations, I built a feedback loop that tracks the effectiveness of each suggestion. By tracing how recommended actions are used, we can correlate the feature with three KPIs:

  • MTTR (Mean Time To Repair): Decrease by reducing diagnostic "guesswork" time.

  • MTBF (Mean Time Between Failures): Increase by suggesting the most effective long-term fix rather than a quick patch.

  • Asset downtime (total time a machine is non-functional): Minimize by getting the machine back to a "Ready" state faster.
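As a minimal sketch of how the two time-based KPIs could be computed from work-order timestamps (the function names and sample data here are illustrative, not the production pipeline):

```python
from datetime import datetime, timedelta

def mttr_hours(repairs):
    """Mean Time To Repair: average (completed - started) across repairs, in hours."""
    durations = [(done - start).total_seconds() / 3600 for start, done in repairs]
    return sum(durations) / len(durations)

def mtbf_hours(failures):
    """Mean Time Between Failures: average gap between consecutive failure timestamps, in hours."""
    gaps = [(b - a).total_seconds() / 3600 for a, b in zip(failures, failures[1:])]
    return sum(gaps) / len(gaps)

t0 = datetime(2024, 1, 1, 8, 0)
print(mttr_hours([(t0, t0 + timedelta(hours=2)),
                  (t0, t0 + timedelta(hours=4))]))   # 3.0
print(mtbf_hours([t0, t0 + timedelta(hours=100),
                  t0 + timedelta(hours=300)]))       # 150.0
```

Segmenting these averages by "recommendation accepted" vs. "recommendation dismissed" is what lets usage be correlated with the KPIs above.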

By tracing each recommended action and tying it to these KPIs, we can compound the business value of this feature by:

  1. Continuing to train the model based on recommended-action success.

  2. Exposing those successes to technicians and managers alike, creating a positive feedback loop in their organization.

  3. Tying it directly to ROI, making it easy to build real business cases.

Reflections & lessons

  • Creating trust comes first: Technicians can be skeptical of AI. The experience had to convey why it was making a recommendation (e.g., "Based on a similar fault in Machine B-12 last month"). Otherwise, we risk turning away our users forever.

  • Data quality matters: An LLM is only as good as the context it’s provided. This led to adjacent initiatives, such as creating opportunities for technicians to document their work effortlessly, further reinforcing the data moat.

Challenges & user insights

1. Moving from "Black Box" to "Open Book"

Technicians are experts with field intuition. If an AI simply says "Replace Part X" without explanation, the technician will likely ignore it because they don't know the reasoning behind it.

"I realized early on that transparency was more important than accuracy for initial adoption. I designed the 'Why' into the UI. Instead of just showing a recommendation, I included evidence markers. For example, the card would say: 'Recommended: Check Pressure Valve (Based on 3 similar faults in the last 6 months and Page 42 of the Service Manual).' This transformed the AI from a 'boss' telling them what to do into a 'researcher' providing them with supporting data."

2. Designing for "Human-in-the-Loop" Feedback

AI models need feedback to improve, but technicians are busy and won't fill out surveys.

"Trust is a two-way street. I designed a low-friction feedback loop directly into the work order flow. Technicians could 'Accept' or 'Dismiss' a recommendation with a single tap. If they dismissed it, I provided a quick-select list of reasons (e.g., 'Not relevant,' 'Already tried'). This did two things: it made the technician feel in control of the tool, and it provided the data science team with the 'ground truth' labels needed to retrain the model for better future accuracy."

3. Communicating confidence without creating doubt

AI is probabilistic, not deterministic. If you show a "60% confidence" score, a technician might think the system is broken.

"We had a lot of internal debate about showing percentage scores. In testing, I found that '82% Confidence' felt arbitrary to a user under a machine. I pivoted the design to use qualitative tiers—categorizing suggestions as 'Primary Suggestion' or 'Alternative Considerations.' This matched the way technicians naturally troubleshoot: they have a Plan A, but they keep a Plan B in mind. The UI reflected their mental model rather than the model's math."

4. The importance of sources

Technicians often don't trust AI-generated summaries of technical text because hallucinations can be dangerous.

"To bridge the trust gap, I insisted on 'Deep Linking' to sources. If the AI recommended a specific calibration procedure from a manual, the UI included a direct link that opened the PDF to that exact page. By providing the source material alongside the AI insight, we reduced the cognitive load of searching while giving the technician the 'safety net' of the original documentation. It proved that the AI wasn't 'hallucinating'—it was just being a very fast librarian."