MOS (Mean Opinion Score) is one of the most widely used ways to summarize VoIP call quality into a single number—so teams can quickly spot when voice performance is improving, degrading, or breaking by route, location, or time of day. In this glossary guide, we’ll explain what MOS is, what the score ranges actually mean, what affects MOS (jitter, packet loss, latency, codecs, and routing), and how to use it in real operations to monitor quality and troubleshoot issues before they turn into complaints, lost deals, or support escalations.

What Is MOS (Mean Opinion Score)?

MOS (Mean Opinion Score) is a 1-to-5 score used to represent how good a voice call sounds to a listener, with higher numbers meaning better perceived quality.

MOS (mean opinion score) measures perceived call quality based on either ratings (in formal testing) or modern algorithmic estimates used in VoIP monitoring. What it doesn’t tell you is the exact root cause by itself. A low MOS can be caused by many factors (jitter, packet loss, latency, codec/transcoding, Wi-Fi issues, routing), so you use MOS as the headline signal, then investigate the underlying network and media metrics to find what’s actually driving the score.

MOS Scale and What the Numbers Mean

MOS (mean opinion score) typically runs from 1 to 5, where 1 is “unusable” quality and 5 is “excellent, natural-sounding” voice. In plain terms, a higher MOS means the call is easier to understand, feels more natural, and requires less effort from the listener.

Here are practical bands you can use as guidance (not absolute law):

  • 4.3–5.0 (Excellent): Clear, natural conversations with minimal effort.

  • 4.0–4.3 (Good): Business-ready quality; most teams are happy here.

  • 3.6–4.0 (Fair/Acceptable): Understandable but noticeably “VoIP-ish” at times; may hurt longer sales/support calls.

  • 3.1–3.6 (Poor): Frequent complaints, repetition, reduced trust.

  • < 3.1 (Bad): Calls feel broken; expect escalations and lost outcomes.
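The bands above can be expressed as a small helper. This is a sketch: the thresholds come from the guidance list in this guide, not from any standard.

```python
def mos_band(mos: float) -> str:
    """Map a MOS value (1.0-5.0) to the practical quality bands above.

    Thresholds mirror the guidance list in this guide; they are
    rules of thumb, not values from a standard.
    """
    if not 1.0 <= mos <= 5.0:
        raise ValueError("MOS must be between 1.0 and 5.0")
    if mos >= 4.3:
        return "Excellent"
    if mos >= 4.0:
        return "Good"
    if mos >= 3.6:
        return "Fair/Acceptable"
    if mos >= 3.1:
        return "Poor"
    return "Bad"
```

For example, `mos_band(4.1)` falls in the "Good" band most teams target for business calls.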

MOS is affected by the full call path, not just your provider. Different endpoints, different codecs, and different routes/destinations (local vs international, carrier hops, congestion) can all change jitter, packet loss, and latency—so MOS can vary by rep, location, network type (Wi-Fi vs wired), and destination even within the same outbound campaign.

How MOS Is Measured

Subjective MOS

Subjective MOS is the original meaning of MOS: real people listen to recorded speech samples and rate the quality on a 1–5 scale, then the scores are averaged (“mean opinion”). This method is valuable because it reflects true perception—including artifacts that automated metrics can miss but that listeners find annoying in real conversation.

It’s not practical for continuous monitoring because it’s slow, expensive, and not scalable. You can’t run listening panels every time a call route changes or a Wi-Fi network gets congested, so modern VoIP operations rely on objective estimates instead.

Objective MOS (algorithmic estimates)

Objective MOS exists so teams can estimate call quality automatically and continuously—per call, per route, per office, per time block—without rating every sample. Monitoring platforms use models that take network and audio signals and convert them into a MOS-like score that’s operationally useful.

Common approaches (high-level):

  • E-model / R-factor–based estimation: Uses network impairment factors (like latency, jitter, and loss) to produce a rating factor (R), which is then mapped into an estimated MOS (mean opinion score).

  • Perceptual speech quality models: Audio-focused models that evaluate how speech would sound to a listener (often using reference-like methods or signal analysis) and output a MOS-style estimate.
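To make the E-model approach concrete, here is a rough sketch of the R-factor → MOS pipeline. This is a common back-of-the-envelope simplification of the ITU-T G.107 E-model, not the full standard: jitter is folded into an “effective latency” with a heuristic weighting, and loss is penalized linearly. Only the final R-to-MOS mapping is the standard one.

```python
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Rough R-factor -> MOS estimate (simplified E-model sketch).

    A heuristic simplification of ITU-T G.107: jitter is folded into
    an "effective latency" and loss is penalized linearly. Real
    implementations model many more impairment factors.
    """
    # Fold jitter into an effective one-way delay (heuristic weighting).
    effective_latency = latency_ms + 2 * jitter_ms + 10
    # Delay impairment: mild below ~160 ms, steeper above.
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    # Loss impairment: roughly 2.5 R-points per 1% packet loss (heuristic).
    r -= 2.5 * loss_pct
    r = max(0.0, min(100.0, r))
    # Standard mapping from rating factor R to estimated MOS (G.107).
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

With low latency, low jitter, and zero loss the estimate lands in the excellent band (above 4.3); heavy loss or long delay drags it below 3.5—the shape of the curve matters more than the exact constants.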

Why different systems can show different MOS for the same call: providers and monitoring tools may use different models, different input signals (network stats vs audio analysis), different sampling windows, or different weighting of impairments. So MOS is best used as a trend and comparison tool (baseline vs today, Route A vs Route B), rather than a single “universal truth” number.

What Affects MOS in VoIP

Network factors

MOS is highly sensitive to latency, jitter, and packet loss because they directly shape how speech feels in real time. Latency makes conversations awkward (people talk over each other), jitter creates choppy/robotic audio when packets arrive unevenly, and packet loss causes clipped words and missing syllables. Even small amounts can be noticeable because voice is continuous.

Also, spikes hurt more than averages. A link that looks “fine on average” can still produce terrible calls if it has short bursts of loss or jitter during congestion—those bursts break words and force repetition. Wi-Fi adds another layer: roaming between access points, interference, and crowded channels can create micro-outages and jitter spikes. And the last mile matters: home networks, ISP variability, and peak-hour congestion can all degrade MOS even when your PBX and provider are healthy.
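The “spikes hurt more than averages” effect is visible in how jitter is actually computed for RTP: the interarrival jitter defined in RFC 3550 is an exponentially smoothed estimate, so a short burst keeps the reported jitter elevated after the burst ends. A minimal sketch (real implementations work in RTP timestamp units, not milliseconds):

```python
def rtp_jitter(transit_times_ms):
    """Smoothed interarrival jitter per RFC 3550 (section 6.4.1).

    transit_times_ms: per-packet transit time (arrival time minus RTP
    timestamp), in milliseconds here for readability.
    """
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)              # interarrival difference
        jitter += (d - jitter) / 16.0    # exponential smoothing, gain 1/16
    return jitter
```

A perfectly paced stream yields 0; a single 40 ms spike pushes the estimate up and it decays only gradually, which is why burst-heavy links report worse jitter than their averages suggest.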

Codec and audio processing factors

Codecs affect perceived quality because they trade bandwidth for compression. Higher-compression codecs can sound worse under poor network conditions and may hide artifacts until the network gets stressed. MOS (mean opinion score) can also drop when calls go through transcoding—converting between codecs across systems or carriers—because each conversion can introduce artifacts and reduce clarity.

Two other quiet influencers are packetization interval and jitter buffer behavior. Larger packetization can increase delay and make loss more painful (losing one packet loses more audio), while jitter buffers smooth jitter but add latency. If buffers are too small, jitter becomes choppy audio; too large, calls feel delayed.
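The packetization trade-off above is simple arithmetic. The sketch below uses G.711’s 64 kbit/s constant bitrate as the example codec; the math is the same for any constant-bitrate codec:

```python
def packetization_stats(ptime_ms: float, codec_kbps: float = 64.0):
    """What a packetization interval implies for a constant-bitrate codec.

    codec_kbps defaults to 64 (G.711). Returns packets per second,
    payload bytes per packet, and milliseconds of speech lost when a
    single packet is dropped.
    """
    packets_per_sec = 1000.0 / ptime_ms
    payload_bytes = codec_kbps * 1000 / 8 * (ptime_ms / 1000.0)
    audio_lost_per_packet_ms = ptime_ms  # one lost packet = one interval of speech
    return packets_per_sec, payload_bytes, audio_lost_per_packet_ms
```

At a 20 ms interval you send 50 packets/s with 160-byte payloads, and each lost packet removes 20 ms of speech; at 60 ms you send a third as many packets, but each loss removes 60 ms—often a whole syllable.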

Route and interconnect factors

Your call path matters. International calling often crosses multiple carrier networks and interconnect points, and MOS can vary by destination because routes differ in latency, loss characteristics, and congestion. Even within the same provider, routes can change over time due to optimization, failover, or cost/routing decisions—so MOS may shift by destination and time block even when nothing changed inside your office.

Congestion and route changes are why it’s useful to segment MOS by country/region, carrier route, and time-of-day—it helps you identify whether the degradation is local (your network) or path-based (interconnect/routing).
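Segmenting MOS this way can be sketched with plain dictionaries. The record fields below (`destination`, `hour`, `mos`) are illustrative—real records would come from your monitoring platform’s CDR/QoS export:

```python
from collections import defaultdict

def mos_by_segment(calls):
    """Average MOS per (destination, hour) segment.

    calls: list of dicts with illustrative fields, e.g.
    {"destination": "DE", "hour": 14, "mos": 4.1}.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for call in calls:
        key = (call["destination"], call["hour"])
        sums[key][0] += call["mos"]
        sums[key][1] += 1
    return {key: total / n for key, (total, n) in sums.items()}
```

If one (destination, hour) segment sits well below the rest while local calls are fine, the degradation is likely path-based rather than local.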

Endpoint factors

Not all quality issues are “network.” Poor headsets, bad microphones, echo cancellation quirks, and even device CPU load can reduce perceived quality and lower MOS—even when network stats look fine. On softphones and mobile apps, background restrictions (sleep modes, battery optimization) can cause glitches, delayed audio start, or intermittent media issues that show up as lower quality and more audio incidents.

MOS vs Jitter vs Packet Loss vs Latency

Think of MOS as the headline score—a single number that summarizes how a call likely sounded. Jitter, packet loss, and latency are the root signals that usually explain why MOS is high or low.

  • MOS — What it measures: overall estimated perceived call quality (1–5). What users experience: “calls sound good/bad today.” What to check first: jitter/loss/latency trends by site, route, destination, and time.

  • Jitter — What it measures: variation in packet arrival timing. What users experience: robotic/choppy audio, uneven speech. What to check first: Wi-Fi quality, LAN congestion, QoS, uplink contention.

  • Packet loss — What it measures: voice packets that never arrive. What users experience: missing words, clipped sentences, frequent “repeat that?” What to check first: WAN stability, overloaded links, router/firewall load, uplink saturation.

  • Latency — What it measures: end-to-end delay. What users experience: awkward pauses, talking over each other. What to check first: routing path/VPN hairpins, long-distance routes, WAN congestion.

What Is a “Good” MOS Score?

A “good” MOS score depends on what kind of call you’re measuring and what your baseline looks like. In general, most teams aim for good-to-excellent MOS on business calls, but the reality is that MOS will naturally vary by endpoint, network path, and destination—so it’s better to use MOS as an operational signal (trend + comparison) rather than a rigid pass/fail number.

Practical benchmarks by context

  • Internal office calls (same LAN/WAN environment)
    These usually have the best conditions (shorter path, fewer carrier hops), so you’d expect MOS (mean opinion score) to be consistently high. If internal calls score lower than expected, it often points to local network issues (Wi-Fi congestion, QoS, uplink saturation) or endpoint problems (headsets, device performance).

  • External PSTN calls (to customers, mobile, carriers)
    These typically run through more interconnect points and variable last-mile networks, so MOS can be slightly lower or more variable than internal calls. The key is consistency: big swings by time-of-day or destination usually indicate routing or congestion changes.

  • Sales calls vs support/call center calls
Sales and discovery calls are more sensitive to subtle quality issues because trust and conversational flow matter—latency and choppy audio can reduce confidence and shorten calls. Support/call center calls can sometimes tolerate “okay” quality, but volume and repetition make quality problems more expensive operationally (more repeats, longer handle time, lower CSAT). In both cases, stable MOS matters more than chasing perfection.

  • International routes
    International calling is where MOS (mean opinion score) naturally varies the most due to distance, routing complexity, and carrier handoffs. “Good” here often means: stable, predictable quality per region, and the ability to spot and fix underperforming destinations/routes quickly.

Limitations of MOS

MOS is useful because it compresses call quality into a single score—but that compression is also its main limitation. Treat MOS (mean opinion score) as an early warning and comparison signal, not a complete diagnosis on its own.

It is an estimate, and different tools can calculate it differently. Some systems lean more on network impairment models, others analyze audio characteristics, and they may sample over different time windows. That means two platforms can show slightly different MOS for the same call—even when both are directionally correct.

That’s why the best practice is to pair MOS with root signals (jitter, packet loss, latency) and incident metrics like one-way audio rate, audio drops, and call disconnects. MOS tells you quality changed; incident metrics and network signals tell you how it failed and where to fix it.

How to Improve MOS

Improving MOS usually comes down to reducing the things you notice most in conversation: choppiness (jitter/loss) and awkward delay (latency). The best approach is to fix quality from the inside out—local network first, then routing—and verify changes with before/after measurements.

Network hygiene

Start with the basics that move MOS (mean opinion score) the fastest. If users are on Wi-Fi and quality is inconsistent, test on Ethernet—wired connections eliminate a huge chunk of jitter and loss. Then reduce contention: large uploads, backups, and video meetings can starve voice traffic during peaks. Finally, apply QoS so SIP/RTP gets prioritized; VoIP doesn’t need massive bandwidth, but it needs stable delivery.

Wi-Fi improvements

If you must run voice over Wi-Fi, treat Wi-Fi like production infrastructure. Poor coverage and crowded channels create jitter spikes that lower MOS even when speed tests look fine. Improve access point placement, avoid congested channels, and minimize roaming disruptions—especially for softphones and mobile clients that move around offices.

Reduce loss and jitter

Packet loss and jitter are the quickest way to tank MOS. A common cause is uplink saturation (cloud backups, file sync, camera uploads). Prioritize RTP media traffic and keep voice flows direct; routing calls through VPN hairpins or long, indirect paths can add delay and variability. If your MOS drops at specific times, focus on what else is happening on the network at those times.

Codec strategy

Codec chaos can quietly lower MOS. Aim for a stable codec strategy across endpoints, PBX, and trunks—and avoid transcoding wherever possible. Each extra conversion can introduce artifacts and reduce clarity. Keep the codec list consistent and predictable, especially across sites and remote users.

Route strategy

If MOS is consistently poor for certain destinations or regions—while local calls are fine—the bottleneck is often the route. Test alternate routing options for those destinations and compare MOS/jitter/loss/latency by route. Route-level testing is one of the fastest ways to improve international call quality without changing anything inside your LAN.

Continuous improvement (baseline → test → verify → document)

Treat MOS improvement like a loop, not a one-time fix. Establish a baseline by site/route/time block, make one change at a time, verify with before/after calls, then document what worked. Over time you’ll build a repeatable playbook—and you’ll spend less time firefighting “random bad calls” because you’ll know exactly where quality degrades and how to correct it.
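The baseline → verify step of this loop can be sketched as a simple degradation check. The 0.3-point threshold below is illustrative—tune it to your own tolerance and baseline variance:

```python
def degraded_segments(baseline, current, drop_threshold=0.3):
    """Flag segments whose current MOS fell below baseline by more than
    drop_threshold (illustrative value; tune to your environment).

    baseline and current map segment keys (e.g. ("DE", 14) or a route
    name) to average MOS values.
    """
    flagged = {}
    for segment, base_mos in baseline.items():
        cur = current.get(segment)
        if cur is not None and base_mos - cur > drop_threshold:
            flagged[segment] = (base_mos, cur)
    return flagged
```

Run this per site/route/time block after each change: segments that stay flagged tell you where to focus the next iteration of the loop.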

FAQ

What does MOS stand for?

MOS stands for Mean Opinion Score—a score used to represent how a voice call sounds to a human listener, typically on a 1–5 scale.

What is MOS in VoIP?

In VoIP, MOS is a call quality score that summarizes how clear and natural a call likely sounded. It’s commonly used in monitoring dashboards to track quality by site, route, time block, or destination.

What MOS score is considered good for VoIP?

As a practical guideline, 4.0+ is generally “good,” with higher scores indicating clearer, more natural audio. What’s “good” depends on your baseline, codec, and routes—so it’s best to compare MOS by trend and segment rather than treating one number as universal truth.

Is MOS subjective or objective?

Originally MOS was subjective (human listening tests). In modern VoIP operations, MOS is usually objective—an algorithmic estimate based on network/media signals—so you can monitor quality at scale.

Why can MOS be high but calls still sound bad sometimes?

Because MOS is an average estimate and may miss brief but painful issues like short dropouts, echo, double-talk, or peak-hour spikes. That’s why you should pair MOS with jitter, packet loss, latency, and incident metrics like one-way audio and drops.

How do I improve MOS quickly?

Start with the biggest wins: move key users to wired connections where possible, reduce network contention during peaks, enable QoS for voice, fix uplink saturation, and avoid unnecessary transcoding. If quality issues are destination-specific, test alternate routes for those regions and monitor MOS before/after.

What’s the difference between MOS and jitter/packet loss/latency?

MOS is the headline score (how the call likely sounded). Jitter, packet loss, and latency are root signals that explain why MOS changes and what to fix.

Should I use MOS alone to judge call quality?

No. Use MOS (mean opinion score) as an early warning and comparison metric, but make decisions using MOS plus jitter/loss/latency and incident rates (one-way audio, audio drops, disconnects).
