NVIDIA 256.53 Drivers and older Games

It’s time for a quick howto – for some time I have been sticking with older NVIDIA driver releases on my Ubuntu machines due to a buffer overrun error in some older games when using driver sets newer than 185.18.36. The interesting thing is that the equivalent Windows drivers exhibited the same problem, except that the issue was eventually fixed on Windows. After doing some searching I came across the fix for this issue – thanks to Conky on nvnews.net.

The problem stems from the newer drivers reporting a GL Extension Version String that is too long for some older programs to process, thus resulting in the buffer overrun. To fix this, NVIDIA included an environment variable that can be set to report older (and thus shorter) version strings.

The game I was having the most trouble with was Call of Duty 1. I know, why would I want to play such an old version? I am a COD addict and still play everything from COD1 all the way up to the current COD:MW2. Under Linux, COD1 must be run using Wine. Normally, cd’ing to the Call of Duty directory and running the following command will start the game:

wine CoDSP.exe

However, using NVIDIA 256.53 (or anything beyond the 185.18.36 driver version) will result in the buffer overrun error. To fix this issue, change your startup command to the following:

__GL_ExtensionStringVersion=17700 wine CoDSP.exe

__GL_ExtensionStringVersion is an environment variable that in this case is set to 17700. This tells the NVIDIA driver to report the GL extension string as if the driver were from the 177.00 series. This effectively eliminates the overrun error and allows the game to start.
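As a quick aside, the VAR=value command form used above sets the variable only in the environment of that single command; your shell session is untouched afterwards. A minimal sketch to see this behavior for yourself:

```shell
# The VAR=value cmd form sets the variable for that one command only;
# the child process sees it, but the current shell does not keep it.
__GL_ExtensionStringVersion=17700 sh -c 'echo "$__GL_ExtensionStringVersion"'
# prints: 17700
echo "${__GL_ExtensionStringVersion:-unset}"
# prints: unset
```

This is why you can leave your normal Wine setup alone and only apply the workaround for the games that need it.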

This issue is mentioned in the NVIDIA driver readme file (here is the snippet), as was kindly pointed out by Conky over at nvnews.net:

Some applications, such as Quake 3, crash after querying the OpenGL extension string

Some applications have bugs that are triggered when the extension string is longer than a certain size. As more features are added to the driver, the length of this string increases and can trigger these sorts of bugs.

You can limit the extensions listed in the OpenGL extension string to the ones that appeared in a particular version of the driver by setting the __GL_ExtensionStringVersion environment variable to a particular version number. For example,

__GL_ExtensionStringVersion=17700 quake3

will run Quake 3 with the extension string that appeared in the 177.* driver series. Limiting the size of the extension string can work around this sort of application bug.
—End Quote—

So, there you have it. I learned that you’re never “too good” to read the readme files.

Asus G72GX Laptop Review

For the past year or so, I have been looking for a good laptop for my mobile pursuits.  I have some pretty stringent requirements for my mobile platform, the most important of which is the ability to run 3D games.  With Linux as my primary OS, and many of the games I play being available for Linux (or able to be coaxed into running with Wine), this pretty much means that Nvidia discrete graphics are a must.  I spent many months looking at systems like the M17x from Alienware and a DIY AVADirect Clevo unit, among others.  The main issue with these rigs comes down to one thing: cost.  A fully loaded M17x can cost just as much as a high end desktop rig.  So, after some shopping around, I had come to the conclusion that I would have to finance one of these monsters if I wanted a good gaming laptop.  A few weeks ago, I was in Best Buy and did something that I never do – look at the budget laptops that they typically carry.  I came upon an ASUS G72GX system.  The specs were actually pretty impressive:

CPU: 2.53 Ghz Core 2 Duo

Video:  Nvidia 260M 1GB Discrete Graphics

Hard disk: 500GB, 5400RPM

1600×900 Widescreen LCD Screen

Webcam, USB, E-SATA, 1394, card reader, Secondary hard disk bay, DVD-R/W drive, G/N Wifi, Gig Ethernet LAN, illuminated keyboard

The most amazing thing: a $999 price tag.  So, I thought about it, did some quick research the next day, and decided to give it a shot.  I have had mostly positive experiences with Asus motherboards in the past, but hadn’t spent much time with anything else from the company.

In short, I am glad I did.  For a modest amount of money I got an excellent performing machine that seems to be able to grind through just about anything I have given it.  Since I didn’t find many online resources for running Linux on this platform, I figured I would write a quick review on the machine and the caveats with running linux on it.

Hardware Compatibility:

I chose the latest version of Ubuntu for the install, 9.10 Karmic Koala.  Now, overall Karmic is a good version of Ubuntu, however it does have some issues (we will save those for another article).

The install went pretty much flawlessly; all hardware was detected and the system came up the first time in a usable state.  Typical Ubuntu up to this point.  I quickly noticed an issue with the wireless adapter in the system.  It is an Atheros 928X adapter, and it turns out that this chipset can be problematic at times on Linux.  The card would work for about 5-10 minutes, but then it would drop off of the network and become essentially unusable.  Only a reboot could correct the situation.  After some research, it appears that better support for the adapter is available in a Karmic kernel backports package.  A simple package installation with the command:

sudo apt-get install linux-backports-modules-karmic

Followed by a reboot was enough to get the adapter usable.  While this fixed the network drop/reboot issue, it was still not perfect.  As the machine was used, you could “feel” times when the network connectivity would drop for a few seconds on a regular basis.  This was especially evident when playing World of Warcraft or other online games.  Thankfully, the 2.6.31-20 kernel update and the associated backport package that came out about a week later seems to have resolved all of the wireless issues.

The next issue was with the Nvidia 260M graphics.  Ubuntu has a tendency to build a distribution with a specific set of Nvidia closed source drivers, and typically does not update those drivers throughout the support life of the distribution version.  I, on the other hand, prefer to install the latest Nvidia drivers by hand.  Unfortunately, the latest Nvidia driver package was not able to recognize the PCI ID of the 260M graphics card in the machine.  This is an interesting issue that I do not yet have a resolution for.  I ended up installing the Ubuntu-supplied Nvidia 185.18.36 package, and it was able to detect the card.  Luckily, the 185.18.36 driver set is a stable and high performing release (unlike some previous drivers packaged with Hardy or Intrepid).

The last hardware related issue I came across was sound card static.  When playing games such as Quake 4 or World of Warcraft, the sound quality suffered from a lot of static.  This was fixed by modifying the /etc/modprobe.d/alsa-base.conf file.  Apparently, by default a sound card power management feature is turned on for Intel HDA sound cards.  Look for the following lines in your /etc/modprobe.d/alsa-base.conf file:

# Power down HDA controllers after 10 idle seconds
options snd-hda-intel power_save=10 power_save_controller=N

Simply commenting out the second line and rebooting the system fixed the issue.
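If you would rather script the edit than open the file by hand, something like the following works. This is just a sketch demonstrated on a scratch copy of the file; point it at the real /etc/modprobe.d/alsa-base.conf (with a backup, and with sudo) to make the actual change.

```shell
# Sketch: comment out the snd-hda-intel power-save line.
# Demonstrated on a scratch copy; adapt the path for the real file.
cat > /tmp/alsa-base.conf <<'EOF'
# Power down HDA controllers after 10 idle seconds
options snd-hda-intel power_save=10 power_save_controller=N
EOF

# Prepend '#' to the options line to disable power saving
sed -i 's/^options snd-hda-intel/#&/' /tmp/alsa-base.conf

grep 'snd-hda-intel' /tmp/alsa-base.conf
# shows: #options snd-hda-intel power_save=10 power_save_controller=N
```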

That about covers the hardware issues.  For the most part, nothing major.

Usability and Performance:

Overall the machine is comfortable to use and works very well.  I can achieve very playable frame rates on several games, even recent titles such as FEAR 2 and Call of Duty: Modern Warfare 2 on Windows.  Several old standbys on Linux, such as Quake 4, Enemy Territory and Doom 3, all run great, even at a full 1600×900 with 4x AA and some AF.

The only complaints I have are regarding the touch pad and the glossy plastic surface that makes up the keyboard.  The touch pad is quite large and can interfere with typing, since your palms will cause it to click or move the mouse.  Turning off the touchpad’s double click capability on Linux did the trick.  The problem with the glossy plastic coating on the keyboard is that it is a finger/palm print magnet, and is hard to clean.

The LCD screen is quite bright and crisp, and has excellent picture quality.  I was worried that 1600×900 (16×9 aspect) was going to be a little narrow for my tastes – I prefer 1920×1200 or 16×10 aspect ratio monitors, but so far this has not been an issue and I am very pleased with the screen real estate and quality.


I would like to run some benchmarks on the machine with the Phoronix Test Suite, but that will have to come at a later date.  Overall, I can’t think of a better deal for the money in a gaming capable laptop/portable workstation.  While the gloss finish and touchpad are a little annoying, they don’t detract from the overall quality and performance of the machine enough for me not to recommend it.  I give it a 9/10 grade.  Asus did a great job with this machine, and I highly recommend it.


The nvidia 256.53 driver set installs and detects the 260M video card in this machine just fine.

Network Emulation with Linux Netem

Back in the day, I used an open source program called NistNET to emulate a WAN for my company’s network test lab on a Linux machine.  I was able to solve a multitude of issues and test our applications in a WAN environment with this product.  Unfortunately, NistNET is no longer maintained, and until recently I had no open source tool for emulating a network in my arsenal.  The other day, while playing Call of Duty 2 with some friends on my dedicated Linux server, I decided that I was tired of having an unfair advantage: my latency to the server was 1 ms, while they all had 50-70 ms or more.  So I went searching for something I could use to add delay to my connection to the server (one of my buddies says I am too honest).  After some searching, I came upon netem, which to my surprise is, and has been, part of the Linux kernel for some time.  I know, some of you Linux guys and gals out there are saying “tell me something I don’t know,” but, ashamedly, I didn’t know about this one – perhaps because I don’t do much of that kind of work any more.

Anyway, using netem, I could do exactly what I wanted: add enough delay to my client-to-game-server traffic (actually, it’s server to client, which I will explain later) to make it seem like there is a lot more network between the server and me than there actually is.  Whether or not this decreases my advantage in the game remains to be seen.

Several “effects” are present on most wide area networks today.  A common effect, latency, can have a drastic impact on the way network communications protocols behave.  Latency is also one of the key issues with playing online games, especially those that require fast reaction to on screen events.  High latency creates what gamers refer to as lag.

The netem function of Linux provides the capability to modify the parameters of egress traffic (i.e., traffic exiting the machine and destined for another point on the network).  With netem, it’s possible to create artificial delay, thus creating latency.  Other possibilities are rate limiting (controlling the amount of bandwidth traffic can use), packet loss, and jitter.  Packet loss can result in very poor performance with TCP applications.  Jitter, also known as variable delay, is bad for real time streaming applications such as voice over IP.
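Packet loss is the one effect I don’t demonstrate elsewhere, so for completeness, here is what it looks like. This is just a sketch: it requires root, and the eth0 interface name is an assumption.

```shell
# Emulate roughly 1% random egress packet loss (sketch; run as root)
tc qdisc add dev eth0 root netem loss 1%

# Remove the emulation when finished
tc qdisc del dev eth0 root
```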

You can probably see why this kind of capability would be important to a network engineer, especially in a lab environment.

So – on to how to use netem.  Netem is controlled by the tc command, which is part of the iproute2 package and is included with most Linux distributions.

Using the tc command, we can easily tell a linux host to delay all packets exiting a network interface using this command:

tc qdisc add dev eth0 root netem delay 80ms

This will add 80ms of delay to all packets leaving the eth0 interface.  To test the result of this command, just do a ping from your machine before issuing the command, and then after:

ping -n

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from icmp_seq=2 ttl=64 time=0.394 ms
64 bytes from icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from icmp_seq=4 ttl=64 time=0.101 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.070/0.167/0.394/0.131 ms

Enter the tc command for adding delay to eth0:

tc qdisc add dev eth0 root netem delay 80ms

Then ping again:

ping -n
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=80.0 ms
64 bytes from icmp_seq=2 ttl=64 time=80.0 ms
64 bytes from icmp_seq=3 ttl=64 time=80.0 ms
64 bytes from icmp_seq=4 ttl=64 time=80.4 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 80.073/80.164/80.414/0.246 ms

Notice the difference in delay (~80ms).

We can also add variable delay (jitter), as most wide area networks (such as the internet) have some jitter associated with them.  Note that since a netem qdisc is already attached to eth0 from the last example, we use change instead of add (a second add would fail).  The following command keeps the 80ms delay and adds +/- 10ms of jitter:

tc qdisc change dev eth0 root netem delay 80ms 10ms

Now lets do the ping again:

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=72.6 ms
64 bytes from icmp_seq=2 ttl=64 time=84.6 ms
64 bytes from icmp_seq=3 ttl=64 time=86.7 ms
64 bytes from icmp_seq=4 ttl=64 time=84.0 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 72.648/82.023/86.752/5.510 ms

Looks even more like a real internet connection now.

To see what qdisc (short for queuing discipline) parameters have been applied to an interface (in this case eth0) use the following command:

tc qdisc show dev eth0

Sample output follows:

qdisc netem 8003: root limit 1000 delay 80.0ms  10.0ms

The last part of the output shows that a delay of 80ms +/- 10ms is applied.

Now, for the important part – how do you turn this off? It took a while to find this in the netem documentation:

tc qdisc del dev eth0 root

This will remove all queuing discipline parameters from the eth0 interface on your system.
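Since the add/del pair is easy to fumble (a second add fails with “RTNETLINK answers: File exists”), a tiny wrapper can help. This is a hypothetical helper of my own, not part of netem or iproute2; the TC variable is there so you can dry-run it with TC=echo before touching a real interface.

```shell
# Hypothetical helper (a sketch, not part of netem/iproute2):
# apply or clear a netem delay on an interface with one call.
TC=${TC:-tc}   # override with TC=echo for a dry run

wan_delay() {
    iface=$1
    delay=$2
    if [ "$delay" = "off" ]; then
        $TC qdisc del dev "$iface" root
    else
        # 'replace' succeeds whether or not a root qdisc already exists,
        # so repeated calls do not fail the way a second 'add' would
        $TC qdisc replace dev "$iface" root netem delay "$delay"
    fi
}

# Usage:  wan_delay eth0 80ms      wan_delay eth0 off
```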

So this is great, but not necessarily what I am looking for.  Adding delay wholesale to the server would increase my fellow gamers’ latency as well as mine, and the idea is to level the playing field.

That is ok, since netem/tc has a way to apply qdiscs only to specific traffic.  In my test network, I have two machines: one running Windows 7 (in this case the Call of Duty client) and one running Ubuntu 9.10 (the COD2 server).  The Windows machine has an IP of, and the server

On the linux server, I run the following commands as root:

tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 140ms 10ms distribution normal
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst flowid 1:3

This set of commands creates a simple priority queuing discipline, attaches a basic netem delay to band 3, and then directs the matched traffic into priority band 3, thus subjecting it to the netem delay of 140ms +/- 10ms (with a normal statistical distribution of jitter).

These commands do exactly what I was wanting – making my delay to the game server about equal to my friends.

So far, it seems to work, however it’s not optimal.  The reason it’s not optimal is that only the packets coming from the server to my Windows client machine are being delayed.  A true internet connection would have delay in both directions.  Since netem only affects the egress of traffic from a network interface, technically you would have to delay the traffic as it leaves the client PC, and delay the traffic as it leaves the server back towards the client.  Since Windows doesn’t have a netem facility (at least not without some expensive commercial software such as that from Shunra), the best way to do this would be to run Call of Duty 2 on Linux using Wine (which is another article for another time).  That way I could induce delay on both machines, and get a “more perfect” simulation of the internet.
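For completeness, there is a Linux-only way around the egress-only limitation that avoids touching the client at all: redirect the server’s ingress traffic through an ifb (intermediate functional block) device and hang a netem qdisc on that. A rough sketch, assuming the ifb module is available (run as root; interface names assumed):

```shell
# Sketch: delay inbound traffic too, by bouncing ingress through ifb0
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Redirect everything arriving on eth0 to ifb0...
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# ...and delay it on the way through
tc qdisc add dev ifb0 root netem delay 80ms
```

Combined with the egress delay already on eth0, this gives latency in both directions from the server alone.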

To show existing filters such as those set by the last set of commands you can use the following commands:

tc filter show dev eth0


tc qdisc show dev eth0

Here is an example output:

tc filter show dev eth0

filter parent 1: protocol ip pref 3 u32
filter parent 1: protocol ip pref 3 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 3 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:3
match c0a8000f/ffffffff at 16

tc qdisc show dev eth0

qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc netem 30: parent 1:3 limit 1000 delay 140.0ms  10.0ms

In summary, netem is a perfect example of what I love most about Linux – flexibility and utility beyond what most commercial Operating Systems (especially those from Redmond) offer out of the box for free. While the way I have implemented it is not quite perfect, netem can provide what I am looking for in terms of simulating network conditions. As I mentioned above, perfection could be achieved by using linux as the client as well as the server, or by making a bridge between the two machines with a dual interface linux machine doing netem delay on both interfaces.

Netem has several capabilities that I didn’t cover, which can be found at the following links:

The Linux Foundation has an overview page here.
Here is a pdf file showing more details on netem and tc usage.

Next up – Client IPSEC VPNs from a linux laptop to a Juniper Netscreen VPN/Firewall device. Coming Soon!