Continuous improvement depends on data. You can't fix what you can't see.
For CI and lean practitioners, that principle drives everything—kaizen events, Pareto analysis, root cause investigations. Yet many lean programs operate with a critical blind spot: the gap between when a problem occurs and when help arrives. Operators stand idle. Machines sit still. And the data to prove it—let alone improve it—simply isn't there.
"I know we're losing time but can't prove it. We've got kaizen boards and 5S audits and OEE dashboards. But nobody tracks how long operators wait for maintenance or quality. That time just disappears."This article explains the data gap in lean manufacturing, how Andon systems close it, and what CI teams need from response time data to drive meaningful improvement. If you lead or support continuous improvement and have wondered why your lean tools don't capture operator wait time, this is the gap—and the solution.
The Data Gap in Lean Programs
Lean manufacturing rests on visibility. Value stream mapping, standard work, visual management—all assume you can see the current state. When you can't, improvement becomes guesswork.
MES and PLC systems capture machine-centric metrics: downtime, cycle times, OEE, scrap rates. They answer what happened at the equipment level. But they rarely capture human wait time—the interval between "operator needs help" and "help arrives."
"MES tracks machines, not human wait time. We know exactly when the press stopped. We have no idea when the operator called for help or how long they stood there before anyone showed up."That gap matters because operator wait time is often the larger portion of downtime. A 15-minute machine stoppage might include 10 minutes of waiting for maintenance and 5 minutes of actual repair. OEE shows 15 minutes of unavailability. It doesn't show that 10 of those minutes were preventable with faster response.
Without response time data, lean teams have no baseline for improvement, no way to prioritize kaizen events, and no proof that changes worked. Kaizen events become exercises in assumption rather than data-driven problem solving. Root cause analysis falters when the root cause—delayed human response—is invisible to your systems.
Jidoka and the Need for Human Response Data
Jidoka—built-in quality, or "automation with a human touch"—originates from Toyota. The idea: stop the line when something is wrong, fix it at the source, prevent defects from propagating.
Andon cords and lights are the classic Jidoka mechanism. An operator pulls the cord or presses a button; the line stops; help is summoned. The principle is sound. The problem is execution.
"Operators are walking 10-plus minutes to find help. By the time they track someone down, the urgency is gone. And we have no record of any of it."Jidoka assumes help arrives quickly. When it doesn't, the system breaks down. Operators stop calling. They work around problems. Or they abandon their station to find help—creating new waste and safety risk.
Response time data makes Jidoka actionable. It tells you whether help actually arrives promptly, which areas wait longest, and where escalation fails. That's the data CI teams need to improve the system, not just document that the line stopped.
Traditional Andon—pull cords and stack lights—implemented the principle decades ago. But visual signals only work when someone is watching. In a noisy plant with mobile responders, stack lights get missed. In a large facility, supervisors can't see every light. Modern wireless Andon preserves the Jidoka principle while ensuring every call reaches the right person, tracks response time automatically, and generates the data lean programs require.
How Andon Fills the Gap MES Leaves
Manufacturing Execution Systems excel at production visibility: scheduling, traceability, quality, OEE. They aggregate data across machines, shifts, and products. What they typically do not capture is the human coordination layer.
| Data Type | MES | Andon System | Manual Tracking |
|---|---|---|---|
| Machine downtime | ✓ Full visibility | Partial (if integrated) | Inconsistent |
| Time to respond (TTR) | ✗ Not captured | ✓ Per-call automatic | Estimated, incomplete |
| Time to fix (TTF) | ✗ Not captured | ✓ Per-call automatic | Estimated, incomplete |
| Calls by station | ✗ | ✓ Real-time + historical | Rarely tracked |
| Calls by type (maintenance, quality, etc.) | ✗ | ✓ | Sometimes |
| Shift comparisons | ✓ | ✓ | Rarely |
| Pareto by station | ✗ | ✓ Within 48 hours | Manual, delayed |
| CSV/BI export | ✓ | ✓ | Manual only |
| Deployment timeline | 6–18 months | Hours to days | Immediate but unreliable |
"Our CI team runs kaizen events quarterly. We used to pick focus areas based on gut feel. Now we pull the Pareto chart—which stations generate the most calls, which call types take longest—and we have data to back up every decision."Manual tracking—operators or supervisors logging times on paper or spreadsheets—can theoretically capture similar data. In practice, it rarely does. Entries are estimated, incomplete, and forgotten when production gets busy. The discipline required for accurate manual tracking is high; the payoff is low compared to automatic capture. Andon systems eliminate that burden while delivering complete, timestamp-accurate data.
Metrics That Matter for CI Teams
For continuous improvement, not all metrics are equal. CI teams need data that supports action.
Time to Respond (TTR)
TTR measures the interval from button press to responder arrival. It reflects notification effectiveness and staffing availability. High TTR suggests escalation problems, understaffing, or facility layout issues.
Time to Fix (TTF)
TTF measures the interval from responder arrival to call closure. It reflects repair difficulty, training, and parts availability. Separating TTR from TTF is critical—a 20-minute TTR with 5-minute TTF points to staffing; a 5-minute TTR with 20-minute TTF points to repair capability.
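The TTR/TTF split can be computed directly from call timestamps. A minimal sketch, assuming each call record carries three timestamps: button press, responder arrival, and call closure. The field names and record shape here are illustrative, not any system's actual export schema:

```python
from datetime import datetime

def ttr_ttf_minutes(call: dict) -> tuple[float, float]:
    """Split one call record into TTR and TTF, in minutes.

    Assumes three ISO-8601 timestamps per call. Field names are
    hypothetical, not a real Andon export schema.
    """
    pressed = datetime.fromisoformat(call["button_pressed"])
    arrived = datetime.fromisoformat(call["responder_arrived"])
    closed = datetime.fromisoformat(call["call_closed"])
    ttr = (arrived - pressed).total_seconds() / 60
    ttf = (closed - arrived).total_seconds() / 60
    return ttr, ttf

call = {
    "button_pressed": "2024-05-01T09:00:00",
    "responder_arrived": "2024-05-01T09:20:00",
    "call_closed": "2024-05-01T09:25:00",
}
ttr, ttf = ttr_ttf_minutes(call)
# A 20-minute TTR with a 5-minute TTF: the bottleneck is response, not repair
print(f"TTR={ttr:.0f} min, TTF={ttf:.0f} min")
```

Aggregating these two numbers separately per station and per shift is what makes the staffing-versus-repair diagnosis possible.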
Calls by Type
Breaking calls into maintenance, quality, materials, and supervision reveals where bottlenecks concentrate. If quality calls wait three times longer than maintenance calls, that's a staffing or process imbalance worth investigating.
Pareto by Station
"The Pareto chart showed us Station 14 was generating 30% of all calls. We'd never have guessed that without the data. One kaizen event focused on that station, and our call volume dropped by a third."Stations that generate disproportionate call volume often indicate equipment issues, training gaps, or support allocation problems. Pareto analysis identifies the vital few for focused improvement.
Shift-by-Shift Comparisons
"Shift-by-shift comparisons revealed our second shift had double the response time of first shift. Same equipment, different staffing. We adjusted our coverage and saw immediate improvement."When response time differs significantly by shift, the cause is usually staffing or support structure—not equipment. The data makes the case for change.
Call Volume Trends
Tracking call volume over time—daily, weekly, monthly—reveals whether improvement efforts are working. A station that previously generated 20 calls per week and now generates 8 has clearly improved. Without baseline and ongoing measurement, you can't quantify that success or sustain accountability.
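A simple way to quantify a trend like that is percent change in weekly call counts. A sketch using the figures above; the weekly counts are illustrative:

```python
def weekly_trend(weekly_counts: list[int]) -> float:
    """Percent change in call volume from the first week to the last."""
    first, last = weekly_counts[0], weekly_counts[-1]
    return round(100 * (last - first) / first, 1)

# Illustrative weekly call counts for one station, before and after a kaizen event
counts = [20, 19, 21, 12, 9, 8]
print(f"Change: {weekly_trend(counts)}%")  # 20 -> 8 calls is a 60% drop
```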
Implementation for Lean-Focused Plants
CI managers evaluating Andon often ask how to align deployment with lean principles. The answer: start small, measure baseline, iterate.
Start With One Line
Pilot on a single production line or department. Choose an area where improvement potential is clear—high call volume, long suspected wait times, or a team eager to participate. A focused pilot generates data quickly and builds credibility for broader rollout.
Measure Baseline
Capture 2–4 weeks of data before making changes. Establish current TTR, TTF, call volume by type and station. This baseline enables before-and-after comparison and credible ROI calculation. It also identifies the highest-impact improvement targets.
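Baseline numbers from a pilot export can be summarized with standard-library tools alone. A sketch, assuming each record is a (station, TTR-minutes, TTF-minutes) tuple; the record shape and values are illustrative, not MMCall's actual export format. Medians are used because they resist the occasional extreme outlier better than means:

```python
import statistics
from collections import defaultdict

def baseline_summary(records):
    """Per-station call count and median TTR/TTF for the pilot window.

    Each record is a (station, ttr_minutes, ttf_minutes) tuple.
    """
    by_station = defaultdict(list)
    for station, ttr, ttf in records:
        by_station[station].append((ttr, ttf))
    summary = {}
    for station, pairs in by_station.items():
        summary[station] = {
            "calls": len(pairs),
            "median_ttr": statistics.median(p[0] for p in pairs),
            "median_ttf": statistics.median(p[1] for p in pairs),
        }
    return summary

# Illustrative pilot data: S14 waits long for help, S07's repairs run long
records = [
    ("S14", 18, 6), ("S14", 22, 4), ("S14", 20, 5),
    ("S07", 6, 19), ("S07", 4, 21),
]
for station, stats in baseline_summary(records).items():
    print(station, stats)
```

Captured before any changes, a summary like this becomes the "before" half of the before-and-after comparison for each kaizen event.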
Use Data for Kaizen Events
"We use the response time reports for every kaizen event now. We know exactly which stations to focus on, what types of calls dominate, and we can measure success with real numbers."Feed Andon data into your kaizen planning. Use Pareto charts to select focus areas. Use TTR/TTF separation to diagnose whether the problem is notification, staffing, or repair. Document baseline metrics, implement changes, then measure again.
Iterate and Expand
Once the pilot demonstrates value, expand to additional lines or shifts. Use lessons learned to refine escalation, staffing, and support structure. The data compounds—more coverage means more insight into plant-wide patterns.
Align With Your Lean Cadence
Integrate Andon data into existing lean rituals. Daily standups can review yesterday's response times. Weekly Gemba walks can include Pareto review. Monthly CI reviews can use trend data to select kaizen themes. The system supports your cadence rather than requiring a new one.
What We Offer: MMCall Andon System 4.0
MMCall Andon System 4.0 is built for plants that need response time data for lean improvement—without waiting for a full MES deployment.
Key capabilities for CI/Lean teams:
- Real-time Pareto charts available within 48 hours of deployment—identify top bottleneck stations immediately
- Response time data by station, shift, and call type—CSV export for BI tools and custom analysis
- Existing customers see 10–12% downtime reduction on average after implementation
- Deployed in 1,000+ lean manufacturing environments across automotive, food & beverage, plastics, and general manufacturing
- 60-day free trial—full equipment, remote setup, training included; return if not satisfied
Frequently Asked Questions
How does Andon data integrate with our existing lean tools?
Andon systems capture response time (TTR) and repair time (TTF) automatically. Data exports to CSV for use in Excel, Power BI, Tableau, or your MES. Many plants display Andon Pareto charts alongside OEE and production dashboards. The REST API enables direct integration with existing systems.
Can we use Andon data for kaizen event preparation?
Yes. Pareto charts show which stations generate the most calls and which call types dominate. Shift-by-shift comparisons reveal staffing gaps. TTR vs. TTF separation helps diagnose whether the bottleneck is notification, response, or repair. This data provides evidence-based focus for kaizen events and measurable before-and-after outcomes.
Does MES replace the need for Andon?
No. MES tracks machine performance—downtime, OEE, throughput, quality. Andon tracks human coordination—how long operators wait for help. They complement each other. MES shows what happened; Andon shows why operators waited. Plants with both often feed Andon data into MES or BI for a complete view of downtime.
What if we're not ready for a full plant rollout?
Start with a pilot. One production line or department is enough to generate baseline data and demonstrate value. Typical pilots run 2–4 weeks. Use the results to build the case for expansion and refine your implementation approach before broader rollout.
How quickly can we get actionable data?
With a wireless Andon system, deployment typically takes hours to days. Real-time dashboards show current status immediately. Pareto charts and response time reports are available within 48 hours of the system going live. You don't need to wait months for data—baseline metrics are available within the first week. For CI teams accustomed to multi-month MES or ERP projects, the speed of Andon deployment is often a surprise. Data for your next kaizen event can be in hand within days.