You can't improve what you don't measure.
That principle applies to response time in manufacturing as much as anywhere else. Yet most plants have no idea how long operators wait for help when they call for it. The time simply disappears—untracked, unrecorded, invisible.
This article explains what response time tracking is, why it matters for continuous improvement, and how to use the data once you have it.
What Is Response Time in Manufacturing?
Response time is the elapsed time between when a problem is identified and when help arrives.
This is different from machine downtime, which your MES or PLC probably tracks already. Response time measures human coordination: how quickly does someone respond when an operator calls for help?
The response could be for maintenance, quality, materials, engineering, or supervision. The metric is the same: time from call to arrival.
Why This Metric Matters
Every minute an operator waits is a minute of lost productivity. Unlike machine downtime—which appears in production reports—wait time for human response is often invisible.
"No metrics tracking."
That's how one manufacturer described their legacy system. Calls went out, responses happened, but there was no record of how long anything took. Without data, improvement is guesswork.
Time to Respond vs. Time to Fix
Effective tracking separates two distinct time periods.
"It stops the initial timer to let you know how long they waited for the call, but it's going to start a new timer to let you know how long they're working on that issue for."
That explanation from an HVAC manufacturer captures the framework:
Wait Time (Time to Respond)
Starts: When the operator presses the button
Stops: When the responder arrives and acknowledges
This measures how long the operator waited before help showed up. It reflects your notification system effectiveness and staffing availability.
Repair Time (Time to Fix)
Starts: When the responder acknowledges arrival
Stops: When the issue is resolved and the call is closed
This measures how long the actual fix took. It reflects responder skill, parts availability, and problem complexity.
Why Separating Them Matters
A call that takes 25 minutes total could be:
- 20-minute wait + 5-minute repair (notification/staffing problem)
- 5-minute wait + 20-minute repair (training/equipment problem)
As one logistics operation explained: "We want to be able to track when they get there. And then how long the actual incident takes to complete."
Two separate metrics. Two different improvement paths.
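As a sketch of how this two-timer model looks in data terms, here is minimal Python. The `ServiceCall` fields and names are illustrative, not taken from any particular system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ServiceCall:
    pressed_at: datetime       # operator presses the button
    acknowledged_at: datetime  # responder arrives and acknowledges
    closed_at: datetime        # issue resolved, call closed

    @property
    def wait_time(self) -> timedelta:
        """Time to respond: button press to acknowledgment."""
        return self.acknowledged_at - self.pressed_at

    @property
    def repair_time(self) -> timedelta:
        """Time to fix: acknowledgment to close."""
        return self.closed_at - self.acknowledged_at

# A 25-minute call that was mostly waiting: a notification/staffing problem
call = ServiceCall(
    pressed_at=datetime(2024, 3, 5, 9, 0),
    acknowledged_at=datetime(2024, 3, 5, 9, 20),
    closed_at=datetime(2024, 3, 5, 9, 25),
)
print(call.wait_time, call.repair_time)  # 0:20:00 0:05:00
```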
Why Tracking Matters for Continuous Improvement
"We'd be looking obviously for improvement over that. I'd like to have call buttons that give you the option to track the metric."
That sentiment—wanting to improve, needing data to do it—drives most interest in response time tracking.
Data Enables Improvement
Without measurement, you can't answer basic questions:
- Are we getting better or worse?
- Did that new process help?
- Where should we focus improvement efforts?
Data Drives Kaizen Events
Continuous improvement events need a starting point. What's the current state? Where are the biggest opportunities?
Response time data identifies targets. "Station 14 averages 12 minutes wait time—three times the plant average. Why?" That's a focused investigation with measurable outcomes.
Data Proves ROI
When you implement changes, tracking proves whether they worked. Suppose response time drops from 8 minutes to 4 minutes after you add a second maintenance tech on second shift. That's quantifiable value.
Manual Tracking vs. Automatic Tracking
There are two approaches to capturing response time data. They differ dramatically in accuracy and effort.
Manual Tracking
Some plants try to capture response times manually:
- Operators write down when they call
- Responders log when they arrive
- Someone enters the data into a spreadsheet
Relies on human memory. People estimate after the fact. A 12-minute wait becomes "about 10 minutes" becomes "not that long."
Requires discipline. Everyone has to log consistently. When things get busy, logging is the first thing skipped.
Creates extra work. Operators and responders have jobs to do. Asking them to also track times adds burden.
Incomplete data. Calls get missed. Entries are forgotten. The dataset is always partial.
Automatic Tracking
Modern tracking systems capture times automatically:
"Gives you the metrics of how long it takes them to answer the call, how long..."
When an operator presses a button, a timer starts. When a responder acknowledges, the timer stops and a new one starts. When the call closes, the second timer stops.
No human intervention required. Every call captured. Accurate timestamps.
The difference in data quality is substantial. Automatic tracking produces complete, accurate datasets. Manual tracking produces estimates with gaps.
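Under the hood, an automatic system reduces each call to a handful of timestamped events and derives both metrics from them. A minimal sketch, with hypothetical event names:

```python
from datetime import datetime

# Hypothetical event log for one call, captured automatically at each step
events = {
    "PRESS": datetime(2024, 3, 5, 9, 0, 12),   # operator presses the button
    "ACK":   datetime(2024, 3, 5, 9, 4, 47),   # responder acknowledges on arrival
    "CLOSE": datetime(2024, 3, 5, 9, 22, 3),   # call closed as resolved
}

wait_time = events["ACK"] - events["PRESS"]    # time to respond
repair_time = events["CLOSE"] - events["ACK"]  # time to fix
print(f"waited {wait_time}, repaired in {repair_time}")
```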
Using Data for Pareto Analysis
Once you have response time data, Pareto analysis helps focus improvement efforts.
The 80/20 Principle
Typically, a small number of sources account for most of the problem:
- A few stations generate most calls
- A few call types take longest to resolve
- A few shifts have the slowest response
Questions to Ask
Which stations generate the most calls? High call volume might indicate equipment problems, training gaps, or inadequate support.
Which call types take longest to respond to? If quality calls wait three times as long as maintenance calls, you may have a staffing imbalance.
Which shifts have the slowest response? Second and third shifts often have fewer support staff. The data shows whether this creates response time gaps.
Which responders are fastest? Not for punitive purposes—to understand what they're doing differently that others could learn.
Fix the Big Problems First
If Station 14 generates 30% of all calls, improving Station 14 has outsized impact. Response time data tells you where to look.
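A quick sketch of that Pareto cut in Python, using made-up call counts per station:

```python
from collections import Counter

# Hypothetical call counts per station over one month
calls = Counter({"Station 14": 120, "Station 7": 45, "Station 2": 30,
                 "Station 9": 25, "Station 11": 20, "Station 3": 10})

total = sum(calls.values())
running = 0
for station, count in calls.most_common():
    running += count
    print(f"{station}: {count} calls "
          f"({count / total:.0%}, cumulative {running / total:.0%})")
```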
Setting Response Time Targets
What's a "good" response time? The answer depends on your operation.
Factors That Affect Targets
Facility size. A 50,000-square-foot plant is different from a 500,000-square-foot plant. Travel time alone affects what's achievable.
Call type. Safety calls demand faster response than material requests. Different categories may have different targets.
Staffing. Response time is constrained by available responders. Targets must be realistic given resources.
Cost of delay. High-value production lines justify investment in faster response. Lower-stakes areas may accept longer times.
Start With Current State
Before setting targets, measure where you are. If average response time is 15 minutes, a 3-minute target isn't realistic tomorrow.
A typical progression:
- Measure current state (e.g., 15 minutes average)
- Set initial improvement target (e.g., 10 minutes)
- Implement changes
- Measure results
- Set new target
- Repeat
Track Progress Over Time
Single snapshots tell you where you are. Trends tell you whether you're improving.
Monthly or weekly averages tracked over time show:
- Whether changes are working
- Seasonal or shift patterns
- Drift from targets
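Computing those weekly averages is straightforward once calls are timestamped. A sketch using Python's standard library and made-up data:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical call log: (date of call, wait time in minutes)
log = [(date(2024, 1, 2), 14), (date(2024, 1, 4), 12),
       (date(2024, 1, 9), 11), (date(2024, 1, 11), 9),
       (date(2024, 1, 16), 8), (date(2024, 1, 18), 7)]

weekly = defaultdict(list)
for day, wait_min in log:
    year, week, _ = day.isocalendar()
    weekly[(year, week)].append(wait_min)

for (year, week), waits in sorted(weekly.items()):
    print(f"{year}-W{week:02}: {mean(waits):.1f} min average wait")
```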
Connecting Response Time to OEE
Overall Equipment Effectiveness (OEE) measures manufacturing productivity:
OEE = Availability × Performance × Quality
Response time directly affects Availability. Equipment isn't producing while operators wait for help.
Quantifying the Connection
If average response time is 10 minutes and you have 50 calls per day:
- 50 calls × 10 minutes = 500 minutes of wait time
- 500 minutes ÷ 60 = 8.3 hours of operator wait time daily
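Spread across production lines, that wait time translates into an availability hit. A back-of-the-envelope sketch, where the line count and schedule are assumptions for illustration:

```python
calls_per_day = 50
avg_wait_min = 10
lines = 10               # assumed number of production lines
scheduled_min = 16 * 60  # assumed two-shift schedule per line

total_wait = calls_per_day * avg_wait_min  # 500 min of waiting plant-wide
per_line = total_wait / lines              # 50 min per line per day
availability_loss = per_line / scheduled_min
print(f"{total_wait} min plant-wide, {per_line:.0f} min/line, "
      f"{availability_loss:.1%} availability lost per line")
```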
Integrating Data Sources
Some operations connect response time data with production data:
- Correlation between call patterns and production shortfalls
- Impact of specific call types on line output
- Shift-by-shift response time vs. productivity
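As a sketch of the simplest such correlation, using the statistics module from Python 3.10+ and hypothetical shift-level numbers:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical shift-level extracts: average wait (min) vs. units produced
avg_wait = [4.2, 6.1, 9.8, 5.0, 11.3, 7.7]
units_out = [940, 880, 760, 910, 700, 820]

r = correlation(avg_wait, units_out)
print(f"wait time vs. output: r = {r:.2f}")  # strongly negative in this sample
```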
Frequently Asked Questions
What's a good response time target?
There's no universal standard. Industrial benchmarks vary widely. Start by measuring your current state, then set improvement goals relative to that baseline. Five-minute average response is excellent; fifteen minutes is common but often improvable.
How do we start if we have no data?
Begin by measuring. Even a few weeks of data reveals patterns. You don't need a full system to start—but automatic tracking produces much better data than manual methods.
Who should see this data?
Different stakeholders need different views:
- Floor supervisors: real-time status, today's metrics
- Managers: trends, comparisons, exception reports
- CI teams: detailed data for analysis
Does tracking change behavior?
Often, yes. The act of measuring tends to improve performance—at least initially. People respond faster when they know times are recorded. This "observer effect" is a feature, not a bug.
The challenge is sustaining improvement. Data alone doesn't create change; it enables accountability and informs action.
Taking Action on Data
Response time tracking transforms invisible waste into visible, improvable metrics.
"Concerned about metrics. Essentially."
That's what drives manufacturers to track response time. They know improvement requires measurement. They know decisions need data.
The technology to capture this automatically exists. The question is whether response time matters enough to your operation to invest in tracking it.