Network Emulation with Linux Netem

Back in the day, I used an open source program called NIST Net to emulate a WAN for my company's network test lab on a Linux machine.  I was able to solve a multitude of issues and test our applications in a WAN environment with that product.  Unfortunately, NIST Net is no longer maintained, and until recently I had no open source tool for emulating a network in my arsenal.  The other day, while playing Call of Duty 2 with some friends on my dedicated Linux server, I decided I was tired of having an unfair advantage: my latency to the server was 1 ms, while theirs was 50-70 ms or more.  So I went looking for something I could use to add delay to my connection to the server (one of my buddies says I am too honest).  After some searching, I came upon netem, which to my surprise is, and has been, part of the Linux kernel for some time.  I know, some of you Linux folks out there are saying "tell me something I don't know," but, somewhat ashamedly, I didn't know about this one, perhaps because I don't do much of that kind of work any more.

Using netem, I could do exactly what I wanted.  I can add enough delay to my client-to-game-server traffic (actually, it's server-to-client, which I will explain later) to make it seem like there is a lot more network between the server and me than there actually is.  Whether or not this decreases my advantage in the game remains to be seen.

Several "effects" are present on most wide area networks today.  A common one, latency, can have a drastic effect on the way network communications protocols behave.  Latency is also one of the key issues with playing online games, especially those that require fast reaction to on-screen events.  High latency creates what gamers refer to as lag.

The netem facility in Linux provides the capability to modify the parameters of egress traffic (i.e., traffic exiting the machine and destined for another point on the network).  With netem, it's possible to create artificial delay, thus creating latency.  Other possibilities are rate limiting (controlling the amount of bandwidth traffic can use), packet loss, and jitter.  Packet loss can result in very poor performance with TCP applications.  Jitter, also known as variable delay, is bad for real-time streaming applications such as voice over IP.
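For a quick taste of those other impairments, here are a couple of sketches (I haven't used these against my game server, and the loss percentage and rates are just placeholder values; each command installs its own root qdisc, so apply one at a time):

tc qdisc add dev eth0 root netem loss 1%
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

The first drops roughly 1% of outgoing packets; the second uses the token bucket filter (tbf) qdisc, rather than netem itself, to cap outgoing bandwidth at about 1 Mbit/s.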

You can probably see why this kind of thing is important to a network engineer, especially in a lab environment.

So – on to how to use netem.  Netem is controlled by the tc command, which is part of the iproute2 package and is included with most Linux distributions.  Note that the tc commands below need to be run as root.

Using tc, we can easily tell a Linux host to delay all packets exiting a network interface with this command:

tc qdisc add dev eth0 root netem delay 80ms

This will add 80ms of delay to all packets leaving the eth0 interface.  To test the result of this command, just do a ping from your machine before issuing the command, and then after:

ping -n 192.168.0.15


PING 192.168.0.15 (192.168.0.15) 56(84) bytes of data.
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=0.394 ms
64 bytes from 192.168.0.15: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 192.168.0.15: icmp_seq=4 ttl=64 time=0.101 ms
^C
--- 192.168.0.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.070/0.167/0.394/0.131 ms

Enter the tc command for adding delay to eth0:

tc qdisc add dev eth0 root netem delay 80ms

Then ping again:

ping -n 192.168.0.15
PING 192.168.0.15 (192.168.0.15) 56(84) bytes of data.
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=80.0 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=80.0 ms
64 bytes from 192.168.0.15: icmp_seq=3 ttl=64 time=80.0 ms
64 bytes from 192.168.0.15: icmp_seq=4 ttl=64 time=80.4 ms
^C
--- 192.168.0.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 80.073/80.164/80.414/0.246 ms

Notice the difference in delay (~80ms).

We can also add variable delay (jitter), as most wide area networks (such as the internet) have some jitter associated with them.  The following command will add +/- 10ms of jitter to the 80ms delay shown in the last example:

tc qdisc add dev eth0 root netem delay 80ms 10ms
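One gotcha: if the 80ms rule from the previous example is still in place, this add will fail with an error along the lines of "RTNETLINK answers: File exists", because a root qdisc is already attached to eth0.  In that case, either delete the existing rule first (shown below) or modify it in place:

tc qdisc change dev eth0 root netem delay 80ms 10ms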

Now let's do the ping again:

PING 192.168.0.15 (192.168.0.15) 56(84) bytes of data.
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=72.6 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=84.6 ms
64 bytes from 192.168.0.15: icmp_seq=3 ttl=64 time=86.7 ms
64 bytes from 192.168.0.15: icmp_seq=4 ttl=64 time=84.0 ms
^C
--- 192.168.0.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 72.648/82.023/86.752/5.510 ms

Looks even more like a real internet connection now.

To see what qdisc (short for queuing discipline) parameters have been applied to an interface (in this case eth0), use the following command:

tc qdisc show dev eth0

Sample output follows:

qdisc netem 8003: root limit 1000 delay 80.0ms  10.0ms

The last part of the output shows that a delay of 80ms +/- 10ms is applied.

Now, for the important part – how do you turn this off? It took a while to find this in the netem documentation:

tc qdisc del dev eth0 root

This will remove all queuing discipline parameters from the eth0 interface on your system.
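To double-check that the cleanup took effect, show the qdisc on the interface again; you should see your distribution's default queuing discipline (typically pfifo_fast) listed instead of netem:

tc qdisc show dev eth0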

So this is great, but not exactly what I am looking for.  Adding delay wholesale to the server would increase my fellow gamers' latency as well as mine, and the idea is to level the playing field.

That is OK, since netem/tc has a way to place qdiscs on specific traffic only.  In my test network, I have two machines: one running Windows 7 (in this case the Call of Duty 2 client) and one running Ubuntu 9.10 (the COD2 server).  The Windows machine has an IP of 192.168.0.15, and the server 192.168.0.14.

On the Linux server, I run the following commands as root:

tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 140ms 10ms distribution normal
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 192.168.0.15/32 flowid 1:3

This set of commands creates a simple priority queuing discipline, attaches a netem delay qdisc to band 3 (class 1:3), and then uses a u32 filter to direct all traffic destined for 192.168.0.15 into that band, making it subject to the netem delay of 140ms +/- 10ms (with a normal statistical distribution of jitter).

These commands do exactly what I wanted – making my delay to the game server about equal to my friends'.
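If I later want to tune that delay without tearing the whole setup down (say, to match a different friend's ping), the leaf netem qdisc can be changed in place; the 100ms figure below is just an example value:

tc qdisc change dev eth0 parent 1:3 handle 30: netem delay 100ms 10ms distribution normal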

So far it seems to work, but it's not optimal.  The reason is that only the packets going from the server to my Windows client machine are being delayed.  A true internet connection would have delay in both directions.  Since netem only affects traffic on egress from a network interface, technically you would have to delay the traffic as it leaves the client PC, and delay the traffic as it leaves the server back towards the client.   Since Windows doesn't have a netem facility (at least not without some expensive commercial software such as that from Shunra), the best way to do this would be to run Call of Duty 2 on Linux using Wine (which is another article for another time).  That way I could induce delay on both machines and get a "more perfect" simulation of the internet.
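For completeness, there is one more workaround I have not tried here: Linux can redirect a server's ingress traffic through an intermediate functional block (ifb) device and apply netem to that, which effectively delays incoming packets as well.  A rough, untested sketch on the server would look something like this:

modprobe ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root netem delay 140ms 10ms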

To show the existing filters and qdiscs, such as those set by the last set of commands, you can use the following commands:

tc filter show dev eth0

and

tc qdisc show dev eth0

Here is an example output:

tc filter show dev eth0


filter parent 1: protocol ip pref 3 u32
filter parent 1: protocol ip pref 3 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 3 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:3
match c0a8000f/ffffffff at 16


tc qdisc show dev eth0


qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc netem 30: parent 1:3 limit 1000 delay 140.0ms  10.0ms

In summary, netem is a perfect example of what I love most about Linux – flexibility and utility, for free, beyond what most commercial operating systems (especially those from Redmond) offer out of the box. While the way I have implemented it is not quite perfect, netem can provide what I am looking for in terms of simulating network conditions. As I mentioned above, perfection could be achieved by using Linux as the client as well as the server, or by placing a dual-interface Linux machine as a bridge between the two machines and applying netem delay on both of its interfaces.

Netem has several capabilities that I didn’t cover, which can be found at the following links:

The Linux Foundation has an overview page here.
Here is a PDF file showing more details on netem and tc usage.

Next up – Client IPsec VPNs from a Linux laptop to a Juniper NetScreen VPN/firewall device. Coming Soon!
