Network Emulation with Linux Netem

Posted on Jan. 31, 2010, under Games, Networking

Back in the day, I used an open source program called NIST Net to emulate a WAN for my company's network test lab on a Linux machine.  It let me solve a multitude of issues and test our applications under WAN conditions.  Unfortunately, NIST Net is no longer maintained, and until recently I had no open source network emulation tool in my arsenal.  The other day, while playing Call of Duty 2 with some friends on my dedicated Linux server, I decided I was tired of having an unfair advantage: my latency to the server was 1 ms, while theirs was 50-70 ms or more.  So I went looking for something I could use to add delay to my connection to the server (one of my buddies says I am too honest).  After some searching, I came upon netem, which to my surprise has been part of the Linux kernel for some time.  I know, some of you Linux folks out there are saying "tell me something I don't know," but, ashamedly, I didn't know about this one, perhaps because I don't do much of that kind of work any more.

Anyway, using netem, I could do exactly what I wanted: add enough delay to my client-to-game-server traffic (actually, it's server to client, which I will explain later) to make it seem like there is a lot more network between the server and me than there actually is.  Whether or not this decreases my advantage in the game remains to be seen.

Several "effects" are present on most wide area networks today.  A common one, latency, can drastically change the way network communication protocols behave.  Latency is also one of the key issues with playing online games, especially those that require fast reaction to on-screen events.  High latency creates what gamers refer to as lag.

The netem facility in Linux provides the capability to modify the parameters of egress traffic (i.e., traffic exiting the machine and destined for another point on the network).  With netem, it's possible to create artificial delay, and thus latency.  Other possibilities include rate limiting (controlling the amount of bandwidth traffic can use), packet loss, and jitter.  Packet loss can result in very poor performance for TCP applications.  Jitter, also known as variable delay, is bad for real-time streaming applications such as voice over IP.
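
For example, random packet loss can be introduced in much the same way as the delay shown later in this article (a minimal illustration; the interface name and the percentage are just placeholders):

tc qdisc add dev eth0 root netem loss 1%

This would drop roughly 1% of the packets leaving eth0.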

Anyway, you can probably see why this kind of capability is important to a network engineer, especially in a lab environment.

So, on to how to use netem.  Netem is controlled by the tc command, which is part of the iproute2 package and is included with most Linux distributions.
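
If you are not sure whether your distribution ships it, asking tc for its version is a quick sanity check (exact output varies between distributions):

tc -V

It should print the iproute2 version it was built from.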

Using the tc command, we can easily tell a Linux host to delay all packets exiting a network interface:

tc qdisc add dev eth0 root netem delay 80ms

This will add 80ms of delay to all packets leaving the eth0 interface.  To see the effect, do a ping from your machine before issuing the command, and then after:

ping -n 192.168.0.15


PING 192.168.0.15 (192.168.0.15) 56(84) bytes of data.
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=0.394 ms
64 bytes from 192.168.0.15: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 192.168.0.15: icmp_seq=4 ttl=64 time=0.101 ms
^C
--- 192.168.0.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.070/0.167/0.394/0.131 ms

Enter the tc command for adding delay to eth0:

tc qdisc add dev eth0 root netem delay 80ms

Then ping again:

ping -n 192.168.0.15
PING 192.168.0.15 (192.168.0.15) 56(84) bytes of data.
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=80.0 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=80.0 ms
64 bytes from 192.168.0.15: icmp_seq=3 ttl=64 time=80.0 ms
64 bytes from 192.168.0.15: icmp_seq=4 ttl=64 time=80.4 ms
^C
--- 192.168.0.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 80.073/80.164/80.414/0.246 ms

Notice the difference in delay (~80ms).

We can also add variable delay (jitter), as most wide area networks (such as the internet) have some jitter associated with them.  Since a netem qdisc is already attached to eth0 from the last example, use change rather than add to give the 80ms delay +/- 10ms of jitter:

tc qdisc change dev eth0 root netem delay 80ms 10ms

Now let's do the ping again:

PING 192.168.0.15 (192.168.0.15) 56(84) bytes of data.
64 bytes from 192.168.0.15: icmp_seq=1 ttl=64 time=72.6 ms
64 bytes from 192.168.0.15: icmp_seq=2 ttl=64 time=84.6 ms
64 bytes from 192.168.0.15: icmp_seq=3 ttl=64 time=86.7 ms
64 bytes from 192.168.0.15: icmp_seq=4 ttl=64 time=84.0 ms
^C
--- 192.168.0.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 72.648/82.023/86.752/5.510 ms

Looks even more like a real internet connection now.

To see what qdisc (short for queuing discipline) parameters have been applied to an interface (in this case eth0), use the following command:

tc qdisc show dev eth0

Sample output follows:

qdisc netem 8003: root limit 1000 delay 80.0ms  10.0ms

The last part of the output shows that a delay of 80ms +/- 10ms is applied.

Now, for the important part: how do you turn this off? It took a while to find this in the netem documentation:

tc qdisc del dev eth0 root

This will remove all queuing discipline parameters from the eth0 interface on your system.
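
To confirm the interface is back to normal, show the qdisc again; on a typical single-queue interface you should see only the kernel's default queuing discipline (usually pfifo_fast):

tc qdisc show dev eth0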

So this is great, but not quite what I am looking for.  Adding delay wholesale to the server would increase my fellow gamers' latency as well as mine, and the idea is to level the playing field.

That is OK, since netem/tc has a way to apply qdiscs only to specific traffic.  In my test network, I have two machines: one running Windows 7 (in this case the Call of Duty 2 client) and one running Ubuntu 9.10 (the COD2 server).  The Windows machine has an IP of 192.168.0.15, and the server 192.168.0.14.

On the Linux server, I run the following commands as root:

tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 140ms 10ms distribution normal
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 192.168.0.15/32 flowid 1:3

This set of commands creates a simple priority queuing discipline, attaches a netem delay qdisc to band 3 (class 1:3), and then uses a u32 filter to classify all traffic destined for 192.168.0.15 into that band, making it subject to the netem delay of 140ms +/- 10ms (with a normal statistical distribution of jitter).
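
A quick way to confirm the filter is actually catching packets is to look at the per-qdisc statistics; the packet counter on the netem qdisc should climb while game traffic is flowing (output format varies a bit between iproute2 versions):

tc -s qdisc show dev eth0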

These commands do exactly what I wanted, making my delay to the game server about equal to my friends'.

So far it seems to work, but it's not optimal.  The reason is that only the packets going from the server to my Windows client machine are being delayed.  A true internet connection would have delay in both directions.  Since netem only affects traffic leaving a network interface, you would have to delay the traffic as it leaves the client PC as well as the traffic leaving the server back towards the client.   Since Windows doesn't have a netem facility (at least not without some expensive commercial software such as that from Shunra), the best way to do this would be to run Call of Duty 2 on Linux using Wine (which is another article for another time).  That way I could induce delay on both machines and get a "more perfect" simulation of the internet.

To show the existing filters and qdiscs, such as those set by the last set of commands, you can use the following:

tc filter show dev eth0

and

tc qdisc show dev eth0

Here is an example output:

tc filter show dev eth0


filter parent 1: protocol ip pref 3 u32
filter parent 1: protocol ip pref 3 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 3 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:3
match c0a8000f/ffffffff at 16

(c0a8000f is 192.168.0.15 in hex, matched at offset 16 in the IP header, which is the destination address field.)

tc qdisc show dev eth0


qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc netem 30: parent 1:3 limit 1000 delay 140.0ms  10.0ms

In summary, netem is a perfect example of what I love most about Linux: flexibility and utility beyond what most commercial operating systems (especially those from Redmond) offer out of the box, for free. While the way I have implemented it is not quite perfect, netem provides what I am looking for in terms of simulating network conditions. As I mentioned above, perfection could be achieved by using Linux as the client as well as the server, or by placing a dual-interface Linux machine configured as a bridge between the two machines and applying netem delay on both of its interfaces.
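
A rough sketch of that bridge approach, assuming a spare Linux box with two interfaces (eth0 and eth1 here) and the bridge-utils package installed, might look like this (the 40ms values are just placeholders):

ifconfig eth0 0.0.0.0 up
ifconfig eth1 0.0.0.0 up
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up
tc qdisc add dev eth0 root netem delay 40ms
tc qdisc add dev eth1 root netem delay 40ms

With egress delay applied to both bridge ports, traffic crossing the bridge is delayed in each direction, giving roughly 80ms round trip.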

Netem has several capabilities that I didn’t cover, which can be found at the following links:

The Linux Foundation has an overview page here.
Here is a PDF file showing more details on netem and tc usage.

Next up: client IPsec VPNs from a Linux laptop to a Juniper NetScreen VPN/firewall device. Coming soon!


SSH Tunnelling (aka Poor Man’s VPN)

Posted on Jan. 29, 2010, under Networking, Server Administration

Tunnelling of TCP traffic can be performed with the ssh command on Linux or with PuTTY on Windows, and can be thought of as a poor man's VPN.  A VPN, or virtual private network, is a method of using a public network such as the internet to securely transmit data via an encrypted "tunnel".

VNC is a method of gaining access to a remote GUI on Linux and Windows machines.  VNC is typically considered insecure and not recommended for use on the open internet.  With an SSH tunnel, this doesn’t have to be an issue, as ssh provides security to an otherwise insecure protocol.

Here is an example of how to use VNC over an SSH tunnel:

Start a VNC server on the Linux host (the SSH server), configured to listen only on the loopback interface:

vncserver :1 -localhost

On the client machine, start ssh with the following command line:

ssh -L 5901:localhost:5901 <server ip> [-l <login>]

Then, to access VNC via the SSH tunnel, use the following command on the client machine:

vncviewer localhost:1

What happens?

The ssh process on the client sets up a TCP port redirection from port 5901 on its loopback interface to port 5901 on the loopback interface of the server machine.  The vncviewer command connects to the redirected port on the local loopback interface, and the connection is carried over the tunnel to the server machine's loopback on port 5901, where the VNC server is listening.

This allows the VNC protocol to be securely tunnelled across the SSH connection.
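
If you want to verify that the client-side listener is in place before launching vncviewer, something like the following should show ssh bound to the loopback address on port 5901 (just a sanity check; output format varies):

netstat -tln | grep 5901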

The previous example showed connecting to a service running on the SSH server itself.  It's also possible to use the SSH server to redirect traffic to other machines on the network behind it.

Sometimes we might need to access a Windows machine behind a Linux SSH server that is connected to the internet (such as a Linux system acting as the firewall for a home network). We can use SSH tunnelling to connect to Windows Remote Desktop as well. (I know this is a Linux blog, but most of us still have to deal with Windows from time to time.)

The following example assumes the following network layout:

Client PC (188.18.199.11) -> Internet -> SSH server on firewall -> private network -> Windows XP (192.168.0.11)
(The SSH server can be behind the firewall as long as it is accessible from the internet.)

1. Make sure the Windows XP host has Remote Desktop (RDP) enabled.

2. On the client PC, start SSH with a tunnel definition of 4000:192.168.0.11:3389 (local port 4000 forwarded to port 3389 on 192.168.0.11).

In PuTTY, this tunnel is defined under Connection > SSH > Tunnels: enter 4000 in the Source port box and 192.168.0.11:3389 in the Destination box, then click Add.

This is exactly equivalent to the ssh command on Linux:

ssh -L 4000:192.168.0.11:3389 <server> -l <userid>

To connect to the RDP service on the internal Windows XP system from the client Windows machine (the one connected to the internet and running the SSH tunnel), use the Remote Desktop Connection application that comes with Windows, and enter localhost:4000 as the address to connect to.

Like the previous example, this causes the program to connect to port 4000 on the local loopback interface, which is then redirected to port 3389 on the 192.168.0.11 machine at the other end of the SSH tunnel.  Port 4000 is used on the client side to avoid conflicting with port 3389, since the client could have its own RDP server running.

Any TCP-based communication can be tunnelled this way over SSH, creating a secure connection for an otherwise insecure protocol. This is also a mechanism for bypassing firewall rules: as long as SSH traffic (TCP port 22) is allowed, it can be used to gain access to other ports that a local firewall might block, simply by using a remote SSH server as a proxy for that traffic.
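
For instance (a hypothetical example following the same pattern as above, with a made-up internal address), an internal web server could be reached by forwarding a local port to its HTTP port:

ssh -L 8080:192.168.0.20:80 <server> -l <userid>

Then point a browser on the client at http://localhost:8080.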

