When apps feel slow or randomly drop connections, everyone blames “the network,” but figuring out what’s really wrong is not so easy. Is it your code, your ISP, your cloud provider, or the path in between?
This guide walks through using mtr for practical network troubleshooting in cloud hosting, on-prem, and hybrid setups, including AWS and other providers.
You’ll learn how to install MTR, run the right kind of test, read the output, and decide whether you’re seeing real packet loss and latency or just noisy statistics.
Think about a classic support ticket: “The site is slow.”
You run ping. It looks fine. You run traceroute. Kind of messy, but nothing obvious. Users still complain.
MTR (the command is mtr on Linux/macOS; WinMTR is the Windows port) combines both ideas:
Like ping, it sends repeated probes and measures round-trip time (RTT).
Like traceroute, it walks hop by hop through the path.
The result is a live view of the route and performance over time.
For anyone doing network troubleshooting, cloud hosting, or running latency‑sensitive apps, MTR is one of those tools you want in your pocket all the time.
Here’s the simple version of what MTR does every second or so:
It sends packets toward a target IP.
It starts with a Time To Live (TTL) of 1.
Every router that forwards the packet reduces the TTL by 1.
When TTL hits 0, that router drops the packet and may send back ICMP Time Exceeded.
Then MTR increments the TTL:
TTL 1 → reaches hop 1.
TTL 2 → reaches hop 2.
TTL 3 → reaches hop 3.
…and so on, until it hits the destination.
While doing this, MTR records for each hop:
Packet loss percentage.
Minimum, average, and maximum RTT.
Jitter (how much the delay jumps around).
This lets you see where along the path things start to get worse.
The nice part: you don’t just get one snapshot. You see behavior over many samples, which is much closer to how real traffic behaves.
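The per-hop numbers are nothing exotic. As a rough sketch of the arithmetic (the RTT samples below are invented, and -1 marks a lost probe), a short awk pipeline can reproduce loss, average, best/worst, and a simple successive-difference jitter figure:

```shell
# Hypothetical RTT samples in ms for one hop; -1 marks a lost probe.
samples="12.1 11.8 -1 13.0 12.4 -1 11.9 12.2"

stats=$(echo "$samples" | tr ' ' '\n' | awk '
  $1 < 0 { lost++; next }                 # no reply: count it as lost
  {
    n++; sum += $1
    if (n == 1 || $1 < min) min = $1
    if (n == 1 || $1 > max) max = $1
    if (n > 1) jit += ($1 > prev ? $1 - prev : prev - $1)
    prev = $1
  }
  END {
    printf "loss=%.0f%% avg=%.1f best=%.1f worst=%.1f jitter=%.1f",
           100 * lost / (lost + n), sum / n, min, max, jit / (n - 1)
  }')
echo "$stats"   # loss=25% avg=12.2 best=11.8 worst=13.0 jitter=0.6
```

(MTR builds may compute their jitter column slightly differently; this is only meant to show what the columns represent.)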
You don’t need anything fancy to start using MTR. Most Unix‑like systems have it in the package manager.
On RHEL, CentOS, and other yum-based systems:

```bash
sudo yum update
sudo yum install mtr
```

On Debian/Ubuntu:

```bash
sudo apt update && sudo apt upgrade
sudo apt install mtr
```

On macOS (with Homebrew):

```bash
brew install mtr
```
On Windows, you can use WinMTR. Install it from its official distribution and run it as a graphical tool.
If you manage servers for cloud hosting or dedicated infrastructure, it’s worth installing MTR on at least:
One machine in each major region or data center.
One troubleshooting box in your on‑premises network.
That way, when someone says “it’s slow from Europe but fine from the US,” you already have a place to test from.
Sometimes the quickest way to debug “is it my provider or is it me?” is to test from a different hosting platform entirely. Spinning up a short‑lived test server in another network can give you a clean comparison.
👉 Launch a low-latency test server with GTHost and compare network paths in minutes.
Once you can see MTR from two different providers side by side, it gets much easier to decide where the real problem lives.
On Linux and macOS, MTR has a lot of useful options. The big decision is usually: ICMP or TCP?
This works like a classic ping/traceroute, using ICMP packets:
```bash
sudo mtr -c 100 example-ip --report
```
-c 100 → send 100 probes (samples).
--report → run silently, then print a summary report when done.
Sometimes your target doesn’t respond to ICMP at all (either by design or due to a firewall). In that case, you can run MTR using TCP:
```bash
sudo mtr -T -c 100 example-ip -P port-number --report
```
-T → use TCP instead of ICMP.
-P port-number → the destination port, like 80, 443, or 22.
A few notes:
Replace example-ip with the IP or hostname you actually care about.
Replace port-number with the port your app uses (for web apps, usually 80 or 443).
If you ever forget an option, you can always ask MTR itself:
```bash
mtr --help
man mtr
```
When MTR finishes in --report mode, you’ll see one line per hop.
Each line usually has columns like:
Host
Loss%
Sent
Last
Avg
Best
Wrst
StDev (or similar)
Different builds might name the columns slightly differently, but the idea is the same.
The main trap: not all “loss” and “latency” in the middle of the path is real.
You have to keep an eye on what happens at the final hop and on the overall pattern.
Let’s look at a common situation you’ll see a lot in real networks.
Imagine an MTR report where:
Hop 2 shows 20% packet loss.
Hop 4 shows 40% packet loss.
Hop 5 has an RTT spike, maybe 300 ms.
The final destination shows:
0% loss.
Low average RTT, maybe under 1 ms.
If the final hop is clean, the path is usually fine.
So what’s going on with those scary numbers in the middle?
A few reasons:
Some routers don’t bother replying to every ICMP Time Exceeded.
Some routers rate‑limit ICMP replies heavily.
Some devices prioritize real traffic and treat these probes as “background noise.”
So the router might drop measurement packets, but still forward real application traffic just fine.
How to read it:
If a hop shows loss, but a later hop shows 0% loss, that earlier “loss” is just the router not responding to probes.
Real data is still flowing through.
If you see a latency spike at a hop, but the RTT goes back down on the next hops, it’s just the ICMP Time Exceeded reply being slow.
Again, real data is probably fine.
Rule of thumb: the final hop (or last responding hop) tells you the truth about end‑to‑end health.
Now imagine a different MTR report:
Hop 10 starts showing 50% loss.
Hops 11, 12, 13, and the destination all show about the same 50% loss.
The final hop shows very high loss and higher RTT than earlier hops.
This pattern usually means real packet loss.
What to look for:
A hop starts showing loss.
All following hops (and the destination) show the same or higher loss.
RTT often stays high or gets worse after that point.
Practical tips:
Focus on the last responding hop with high loss. That’s usually where the problem becomes visible.
If loss appears and then disappears further down the path, it’s almost always just probe handling, not real loss.
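This "does the loss persist to the end?" check can be automated. Below is a sketch that parses a simplified, hypothetical report (hop, host, loss %, avg RTT; the hostnames and numbers are invented) and flags hops whose loss exceeds what the final hop shows:

```shell
# Hypothetical, simplified report: hop, host, loss %, avg RTT (ms).
report='1 gw.local       0.0  1.2
2 isp-a.example  20.0  8.5
3 core.example    0.0  9.1
4 edge.example   40.0 12.0
5 dst.example     0.0 11.8'

# Loss at the final hop is the end-to-end truth.
final_loss=$(echo "$report" | awk 'END { print $3 }')

# Any hop showing more loss than the final hop is probably just a
# router deprioritizing its ICMP replies, not real loss.
artifacts=$(echo "$report" | awk -v final="$final_loss" '
  $3 + 0 > final + 0 { print "hop " $1 ": probe artifact (" $3 "% loss)" }')
echo "$artifacts"
```

Real `--report` output has a header and more columns, so a production parser would need to skip those; the pattern being tested is the same.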
Sometimes the destination is just ignoring your probes:
MTR shows 100% loss at the last hop.
Yet your application connects to that same IP and port just fine.
This can happen when:
The server blocks ICMP completely.
The service listens only on specific ports or protocols.
Firewalls or security groups are very strict.
How to handle this:
If all previous hops look healthy, but the last hop is “dead,” suspect configuration instead of network failure.
Try TCP-based MTR (-T -P) on the actual app port.
Or use a tool like hping3 to probe the same TCP port end-to-end.
In larger networks and cloud environments (including AWS and other cloud hosting providers), traffic might take multiple equal‑cost paths to reach the same destination. This is called Equal Cost Multi‑Path routing (ECMP).
Rough idea:
The router hashes packet fields (source/destination IP, port, protocol).
Packets with the same hash follow the same path.
TCP/UDP flows often get spread across multiple paths.
ICMP doesn’t use ports, so it might stick to a single path.
What this means for MTR:
With TCP/UDP MTR, each probe might use a different source port.
That can push different probes onto different ECMP paths.
If some paths have more hops and some routers don’t send ICMP Time Exceeded, MTR can misinterpret that as packet loss.
Imagine two paths to the same destination:
Short path: 3 hops, all reply nicely.
Long path: 7 hops, and some routers in the middle don’t respond with Time Exceeded.
If MTR discovers it can reach the destination with TTL 3 via the short path, it caps its probes at TTL 3.
So any probe that gets hashed onto the longer path expires before reaching the destination and shows up as "loss" in the report, even though real application traffic completes the full end‑to‑end connection just fine.
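The per-flow hashing idea can be sketched in a few lines. This is a toy illustration only (no real router uses cksum, and the addresses are documentation placeholders): hash the 5-tuple, then pick one of two equal-cost paths.

```shell
# Toy ECMP path selection: hash the 5-tuple, pick one of $paths paths.
paths=2
pick_path() {  # args: src-ip dst-ip proto src-port dst-port
  echo "$*" | cksum | awk -v n="$paths" '{ print $1 % n }'
}

# ICMP has no ports: identical 5-tuples, so every probe takes one path.
icmp_a=$(pick_path 10.0.0.1 192.0.2.7 icmp 0 0)
icmp_b=$(pick_path 10.0.0.1 192.0.2.7 icmp 0 0)

# TCP probes with different source ports hash independently and can be
# spread across paths (whether these two differ depends on the hash).
tcp_a=$(pick_path 10.0.0.1 192.0.2.7 tcp 40001 443)
tcp_b=$(pick_path 10.0.0.1 192.0.2.7 tcp 40002 443)
echo "icmp: $icmp_a $icmp_b  tcp: $tcp_a $tcp_b"
```

The point of the sketch: the ICMP picks are always identical, while TCP probes with varying source ports are free to land on different paths.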
What to do:
If MTR reports big loss but your app still feels healthy, be suspicious.
Use a tool that doesn’t mess with TTL, like hping3, to test the actual app port:
```bash
sudo hping3 -S -p 22 10.0.100.1
```
Compare results. If hping3 shows clean connectivity, you might have an ECMP + MTR interpretation issue, not real packet loss.
MTR is powerful, but it manipulates TTL, which can create odd edge cases. It’s good practice to validate what you see.
If you run ICMP MTR:
Also run plain ping or hping3 in ICMP mode toward the same target.
Check long‑run packet loss and average RTT.
If you run TCP MTR (with -T):
Use hping3 on the same TCP port.
Confirm whether end‑to‑end loss is real or just an artifact of how MTR probes.
This combo gives you:
Hop‑by‑hop visibility (MTR).
Clean end‑to‑end checking without TTL tricks (Ping/Hping).
When you run infrastructure for latency‑sensitive workloads on cloud hosting or dedicated servers, this extra validation is usually worth the extra minute. It can prevent you from blaming the wrong part of the path or opening tickets with the wrong provider.
One more thing that bites a lot of people: asymmetric routing.
Your traffic might go one way, but the reply takes a completely different path. For example:
Outbound: from your on‑prem network to a cloud service through ISP A.
Inbound: replies from the cloud to you through ISP B, maybe even via another region.
If you only run MTR from your side, you’re only seeing half the story.
Good practice:
Run MTR from source → destination.
Also run MTR from destination → source (if you control both ends or have access to both).
If performance looks bad only in one direction, you probably have:
Congestion, filtering, or shaping on one of the ISPs.
A mis‑routed or sub‑optimal path in only one direction.
Policy-based routing or ECMP differences.
In multi‑cloud or hybrid setups, spinning up a small VM or bare‑metal server in another provider’s network is a great way to collect these “reverse” traces and compare.
When you have an MTR report in front of you, walk through these steps:
Look at the final hop first.
If loss is 0% and RTT looks fine, you probably don’t have a real network problem.
Check where loss first appears and whether it continues.
If loss appears at hop N but goes back to 0% later, it’s likely just that router not replying.
Watch for RTT trends.
A spike that doesn’t persist is usually harmless.
A consistently high RTT from some hop onward might be real latency.
Consider the type of probe.
ICMP blocked or throttled? Try TCP-based MTR.
App uses a specific port? Test that exact port.
Validate with Ping/Hping.
Especially when results seem surprising or conflict with user experience.
Check both directions when possible.
Asymmetric routing is common in large networks and cloud hosting platforms.
Q: Is a bit of packet loss in the middle of the path always bad?
A: No. If later hops, especially the final one, show 0% loss, that “loss” in the middle is usually just routers dropping measurement packets.
Q: How many packets should I send with -c?
A: For quick checks, 50–100 is fine. For more serious troubleshooting (like intermittent issues in a production hosting environment), 500 or more can give a better picture.
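One way to think about the sample count: each lost probe moves the loss percentage by 100/N points, so a short run can only measure coarse loss. A quick check of the granularity:

```shell
# Resolution of the loss estimate for different probe counts (-c).
out=$(for count in 50 100 500; do
  awk -v c="$count" \
    'BEGIN { printf "%d probes: one lost packet = %.1f%% loss\n", c, 100 / c }'
done)
echo "$out"
```

With only 50 probes, a single dropped packet already reads as 2% loss, which is why intermittent problems deserve longer runs.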
Q: Should I always use TCP MTR instead of ICMP?
A: Not always. ICMP is simple and often enough. Use TCP when:
ICMP is blocked or heavily throttled.
You need to see behavior on the exact port your app uses.
Q: How does this help with cloud hosting and providers like AWS or GTHost?
A: MTR helps you separate:
Problems in your local network.
Problems in your ISP’s core.
Problems near or inside the cloud provider’s network.
That way you know whether to fix something yourself or open a ticket with the right provider.
MTR gives you a clear, repeatable way to see where packet loss and latency really start, instead of guessing or arguing with vague “the network is slow” complaints. By combining MTR with Ping/Hping and testing in both directions, you can tell whether you’re looking at real network issues or just noisy router behavior.
If you want a fast, realistic way to validate MTR results from different locations or compare providers, 👉 GTHost is suitable for latency‑sensitive troubleshooting scenarios because you can spin up servers quickly in many locations and see how real traffic behaves. This makes it much easier to confirm what your MTR output is telling you and keep your applications running smoothly.