Asus G72GX Laptop Review

For the past year or so, I have been looking for a good laptop for my mobile pursuits.  I have some pretty stringent requirements for my mobile platform, the most important of which is the ability to run 3D games.  With Linux as my primary OS, and many of the games I play being available for Linux (or able to be coaxed to run under Wine), this pretty much means that Nvidia discrete graphics are a must.  I spent many months looking at systems like the M17x from Alienware and DIY AVADirect Clevo units, among others.  The main issue with these rigs comes down to one thing: cost.  A fully loaded M17x can cost just as much as a high end desktop rig.  So, after some shopping around I had come to the conclusion that I would have to finance one of these monsters if I wanted a good gaming laptop.  A few weeks ago, I was in Best Buy and I did something that I never do – look at the budget laptops that they typically carry.  I came upon an Asus G72GX system.  The specs were actually pretty impressive:

CPU: 2.53 Ghz Core 2 Duo

Video:  Nvidia 260M 1GB Discrete Graphics


Hard disk: 500GB, 5400RPM

Screen: 1600×900 Widescreen LCD

Webcam, USB, E-SATA, 1394, card reader, Secondary hard disk bay, DVD-R/W drive, G/N Wifi, Gig Ethernet LAN, illuminated keyboard

The most amazing thing:  a $999 price tag.  So, I thought about it, did some quick research the next day, and decided to give it a shot.  I have had some mostly positive experience with Asus motherboards in the past, but hadn’t spent much time on anything else from the company.

In short, I am glad I did.  For a modest amount of money I got an excellent performing machine that seems to be able to grind through just about anything I have given it.  Since I didn’t find many online resources for running Linux on this platform, I figured I would write a quick review on the machine and the caveats with running linux on it.

Hardware Compatibility:

I chose the latest version of Ubuntu for the install, 9.10 Karmic Koala.  Now, overall Karmic is a good version of Ubuntu; however, it does have some issues (we will save that for another article).

The install went pretty much flawlessly: all hardware was detected and the system came up the first time in a usable state.  Typical Ubuntu up to this point.  I quickly noticed an issue with the wireless adapter in the system.  It is an Atheros 928X adapter, and it turns out that this chipset can be problematic at times on Linux.  Basically the card would work for about 5-10 minutes, but then it would drop off of the network and become unusable.  Only a reboot could correct the situation.  After some research, it appears that better support for the adapter is available in a Karmic kernel backports package.  A simple package installation with the command:

sudo apt-get install linux-backports-modules-karmic

Followed by a reboot was enough to get the adapter usable.  While this fixed the network drop/reboot issue, it was still not perfect.  As the machine was used, you could still “feel” the network connectivity drop for a few seconds on a regular basis.  This was especially evident when playing World of Warcraft or other online games.  Thankfully, the 2.6.31-20 kernel update and the associated backport package that came out about a week later seem to have resolved all of the wireless issues.

The next issue was with the Nvidia 260M graphics.  Ubuntu has a tendency to ship a distribution with a specific set of Nvidia closed source drivers, and typically does not update those drivers throughout the support life of the distribution version.  I, on the other hand, prefer to install the latest Nvidia drivers by hand.  Unfortunately, the latest Nvidia driver package was not able to recognize the PCI ID of the 260M graphics card in the machine.  This is an interesting issue that I do not yet have a resolution for.  I ended up installing the Ubuntu supplied Nvidia 185.18.36 package, and it was able to detect the card.  Luckily, the 185.18.36 driver set is a stable and high performing release (unlike some previous drivers packaged with Hardy or Intrepid).

The last hardware related issue I came across was sound card static.  When playing games such as Quake 4 or World of Warcraft, the sound quality suffered from a lot of static.  This was fixed by modifying the /etc/modprobe.d/alsa-base.conf file.  Apparently, by default a sound card power management feature is turned on for Intel HDA sound cards.  Look for the following lines in your /etc/modprobe.d/alsa-base.conf file:

# Power down HDA controllers after 10 idle seconds
options snd-hda-intel power_save=10 power_save_controller=N

Simply commenting out the second line and rebooting the system fixed the issue.
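After the edit, that section of /etc/modprobe.d/alsa-base.conf looks like this:

```
# Power down HDA controllers after 10 idle seconds
# options snd-hda-intel power_save=10 power_save_controller=N
```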

That about covers the hardware issues.  For the most part, nothing major.

Usability and Performance:

Overall the machine is comfortable to use and works very well.  I can achieve very playable frame rates in several games, even recent titles such as FEAR 2 and Call of Duty: Modern Warfare 2 on Windows.  Several old standbys on Linux, such as Quake 4, Enemy Territory and Doom 3, all run great, even at a full 1600×900 with 4x AA and some AF.

The only complaints I have are regarding the touch pad and the glossy plastic surface that makes up the keyboard area.  The touch pad is quite large and can interfere with typing, since your palms will cause it to click or move the mouse.  Turning off the touchpad's tap/click capabilities in Linux did the trick.  The problem with the glossy plastic coating on the keyboard is that it is a finger/palm print magnet, and is hard to clean.

The LCD screen is quite bright and crisp, and has excellent picture quality.  I was worried that 1600×900 (16×9 aspect ratio) was going to be a little short for my tastes – I prefer 1920×1200, 16×10 aspect ratio monitors – but so far this has not been an issue, and I am very pleased with the screen real estate and quality.


I would like to run some benchmarks on the machine with the Phoronix Test Suite, but that will have to come at a later date.  Overall, I can't think of a better deal for the money in a gaming capable laptop/portable workstation.  While the gloss finish and touchpad are a little annoying, they don't detract from the overall quality and performance of the machine enough to keep me from recommending it.  Asus did a great job with this machine, and I give it a 9/10 grade.


Update: the Nvidia 256.53 driver set installs and detects the 260M video card in this machine just fine.

Ubuntu Apache LAMP Server Quick Howto – Part 1 – Apache Basics

Linux web application servers typically use the Linux/Apache/MySQL/PHP (LAMP) stack: Linux is the OS, Apache the web server layer, MySQL the database, and PHP the dynamic HTML/scripting language. There is an amazing number of LAMP based applications out there, so knowing how to administer a LAMP server is a key skill for running Linux application servers.

This first article will focus on installing the LAMP stack on an Ubuntu machine, and administering the Apache web server.

There are a few concepts I would like to cover first, though. One of them seems to escape a lot of people in this space: the concept of name based virtual hosts. Name based virtual hosting is the ability of a web server to serve content based on the URL of the incoming request. This allows a single server to serve multiple websites' content without needing multiple IP addresses. Essentially, the server processes the URL request by matching it against a known set of virtual host definitions. When it matches a URL to a virtual host, it serves content from the directory structure assigned to that virtual host.
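As a minimal sketch (the hostnames and paths here are made up), two name based virtual hosts sharing one IP address might look like this:

```apache
<VirtualHost *:80>
ServerName www.site-one.example
DocumentRoot /var/www/site-one
</VirtualHost>

<VirtualHost *:80>
ServerName www.site-two.example
DocumentRoot /var/www/site-two
</VirtualHost>
```

Apache compares the Host header of each incoming request against the ServerName entries and serves content from the matching DocumentRoot.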

The other concept is fully qualified domain names. A fully qualified domain name, or FQDN, is a hostname containing the host and the domain, including the top level domain suffix. Common top level domains (TLDs) are .gov, .com, .org, etc. An example of a FQDN would be www.example.com: the www is the host, and example.com is the domain, so www.example.com indicates the host called www in the domain example.com.

The last concept is what part of a URL is handled by the domain name system (DNS). DNS is responsible for resolving the part of a URL that is between the http:// and the next / in the URL. If there are no following /'s in a URL, then DNS processes all of the URL after the http://. I am a DNS administrator for a large retailer and I am constantly asked to add an entry in DNS to allow, for instance, a URL like sale.example.com to redirect to www.example.com/test/test.html (made-up names, but a typical request). DNS only handles the "sale.example.com" or "www.example.com" part; everything after the / in the destination, i.e. /test/test.html, is not handled by DNS, and therefore DNS cannot do this sort of redirection. In this case, I can modify DNS to point sale.example.com at the name www.example.com (or its corresponding server IP address), but it is up to the virtual hosting definition, or a bit of redirect code on the web server, to handle the rest. Ok, enough said about that.
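For completeness, the web server half of such a redirect can be handled with Apache's Redirect directive rather than DNS – a minimal sketch, with hypothetical names:

```apache
<VirtualHost *:80>
ServerName sale.example.com
Redirect permanent / http://www.example.com/test/test.html
</VirtualHost>
```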

First, in order to install the LAMP stack on an Ubuntu system, we need to make sure we install the associated packages:

sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server php5-gd phpmyadmin

There are some commands that are very useful to control Apache features and served content. The first set of commands controls the availability of Apache features. For instance, you might want to enable home directory public_html serving, which is the ability to serve content from a user's public_html folder in their home directory. Content is then accessed in the web browser by going to http://<server ip>/~username. To do this, use the a2enmod command:

sudo a2enmod userdir && sudo /etc/init.d/apache2 restart

Other apache modules can be enabled with this a2enmod command. Here is an example that covers server side includes (we will cover server side includes in a later article):

sudo a2enmod include && sudo /etc/init.d/apache2 restart

The default document root is /var/www (you need superuser privileges to write in this directory).  Any files in this directory that are world readable will be accessible by entering the following in a web browser:

http://<server ip or name>/filename

Anything in a subdirectory below /var/www will appear by appending the directory name to the url. For example, if there is a directory with content at /var/www/mywebsite, it would be accessible with the following url:

http://<server>/mywebsite/filename

where <server> is the name or IP of your apache server.

New sites can be added by creating site definition files in the /etc/apache2/sites-available directory. Files in this directory are essentially apache configuration files that can be read in or included when apache starts up. This is useful for adding new websites via virtual host definitions, or enabling SSL for your sites.
Edit a new configuration file for the new site:

gksudo gedit /etc/apache2/sites-available/<filename>

Here is an example of a site definition file for adding the directory /var/www/phpsysinfo as a virtual host on the server.

<Directory "/var/www/phpsysinfo">

# Possible values for the Options directive are "None", "All",
# or any combination of:
#   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
# Note that "MultiViews" must be named *explicitly* – "Options All"
# doesn't give it to you.
# The Options directive is both complicated and important.  Please see
# the Apache documentation for more information.

# -Indexes disables directory browsing, +Includes turns on SSI
Options -Indexes FollowSymLinks +Includes

# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
#   Options FileInfo AuthConfig Limit
AllowOverride All

# Controls who can get stuff from this server.
Order allow,deny
Allow from all

# Add this to allow for different default page names
DirectoryIndex index.html index.shtml index.php


<VirtualHost *:80>

DocumentRoot /var/www/phpsysinfo


Note the :80 in the VirtualHost statement – it is needed if running a combination of SSL and non-SSL sites.  We will break down the components of this file later.

To enable the new site use the a2ensite command:

sudo a2ensite <sitename> && sudo /etc/init.d/apache2 restart

Later, if you would like to disable that site, you can use the a2dissite command to remove the site:

sudo a2dissite <sitename> && sudo /etc/init.d/apache2 restart

Note that if the site content is at /var/www or a directory below it, you do not have to create a site file.  The site file is only used to include directories other than what is below /var/www, or to create virtual hosts for making content available at a specific host name.

So what is happening with these a2ensite/a2dissite commands?  Essentially, a2ensite makes symbolic links in the /etc/apache2/sites-enabled/ directory pointing to the appropriate file in /etc/apache2/sites-available for the site being enabled.  The a2dissite command simply deletes the symbolic links.
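The same mechanism can be demonstrated by hand – a sketch using a scratch directory in place of /etc/apache2 (the real commands operate on the sites-available/sites-enabled directories; the site name "mysite" is hypothetical):

```shell
# stand-in for /etc/apache2 so we can demonstrate without root
mkdir -p /tmp/apache-demo/sites-available /tmp/apache-demo/sites-enabled
echo '<VirtualHost *:80>' > /tmp/apache-demo/sites-available/mysite

# what "a2ensite mysite" does: create a symlink in sites-enabled
ln -sf ../sites-available/mysite /tmp/apache-demo/sites-enabled/mysite
ls -l /tmp/apache-demo/sites-enabled/

# what "a2dissite mysite" does: remove the symlink
# (the site file itself stays in sites-available)
rm /tmp/apache-demo/sites-enabled/mysite
```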

Beware of some web applications that install themselves outside of the normal /var/www and /etc/apache2/sites-available directory structures. An example comes to mind: phpmyadmin. Phpmyadmin is a PHP based tool for administering MySQL servers. Phpmyadmin on Ubuntu does not install in the /var/www folder; instead it installs in /usr/share/phpmyadmin. It also stores configuration in /etc/phpmyadmin. In the /etc/phpmyadmin folder there is an apache.conf file. This file is read when Apache is started, and it includes the /usr/share/phpmyadmin directory in the web server's directory structure, so that it appears at http://servername/phpmyadmin, as if it were a directory under /var/www. How does it do this? Another directory, /etc/apache2/conf.d, includes symlinks to files to be included in the apache configuration. It's actually very similar to sites-enabled/sites-available. In /etc/apache2/conf.d/ there is a phpmyadmin.conf that is symlinked to /etc/phpmyadmin/apache.conf. These symlinks are created by apt during package installation.

So, in short we have covered installing the LAMP stack, and controlling Apache’s configuration for serving content. In the next article, we will cover creating Secure Sockets Layer (SSL) websites using apache.

Been gone for awhile….

Hi – not long after I started this blog, I lost a dear friend. This loss has kept me from blogging for a couple of weeks. I had known him for 16 years, and our time together on this earth was very special, and things won’t be quite the same now.

His name was Snoopy, and he was my dog. I bought Snoopy from a puppy farm in Smyrna, Tennessee in November of 1994. He lived with me and was my best friend through homelessness during a divorce and other trials. He never gave up on me and always met me with a wagging tail. I have no children, but Snoopy in many ways was my child. My wife and I acquired a couple of other beagles, Bowser and Scout and for 5 years we have been a three dog family, with Snoopy being the oldest and in many ways the most special.

We are now down to two dogs, and the house seems a bit emptier than it had been. Snoopy developed a problem with his larynx and was having issues breathing. Our vet warned us that he could stop breathing and we should consider a date to have him put to sleep. Two Wednesdays ago, early in the morning, Snoopy woke us up in convulsions, unable to breathe. My wife saved his life by doing exactly what the vet had instructed us to do: pull on his tongue to open his larynx again. We rushed Snoopy to the emergency vet, where he started acting like himself again, but the vet felt he wasn't getting enough oxygen since his tongue was purple. We took Snoopy back home and stayed up the rest of the night with him until we could call our normal vet and schedule an appointment for doing the inevitable.

You see, the next time his throat would collapse would probably be the end. The look of fear in his eyes the first time was too much to bear, and death by suffocation would be a painful and fearful way for him to go. We decided the time had come to say goodbye to a wonderful companion and friend for all those years. We held him as the vet gave him the injection and made sure his passing was as comfortable as it could be.

So, I make this entry in my blog as a tribute to him, and hope that someone else will read this and by doing so may learn a little about who he was. We will miss him dearly and have cried many a tear. A friend told me the other day, that “They count on us to do the right thing”. I do hope and believe that I will meet him again someday – farewell my friend and rest in peace. I bet he is playing ball with the angels right now….

Snoopy - A special dog

Client IPSEC VPNs with Linux and Juniper Netscreen

Today, as promised I am going to show everyone how to set up a client IPSEC VPN to a Juniper Netscreen FW/VPN appliance from a Linux machine.  Juniper is a market leader in the Firewall and VPN space, and provides appliances from the Small office Home Office footprint all the way up to the largest enterprise data center gateways.  The small office version, currently an SSG 5, is based on Juniper’s ScreenOS.  This tutorial only covers ScreenOS configuration.  Juniper is now marketing a new platform called the SRX series. These units run JUNOS, which is Junipers Router OS outfitted with VPN and Firewall functionality pulled from ScreenOS.

The Linux system examples I show should work on Ubuntu (and Gentoo, provided your kernel has support for IPSEC configured).

So why VPN?  As I have mentioned in previous posts, a VPN is a secure method for using a public network for private communications.  Most VPNs fall into two types: IPSEC and SSL.  We will be covering IPSEC VPNs in this posting.  I typically use a VPN for access to my employer's network while on the road so I can be productive while traveling.  With the right technology, you can gain access to your home network the same way.

I have a small home network that is protected by a Juniper Netscreen SSG5.  This SSG 5 is configured to allow one or more “client” vpns to connect to it.  Essentially a client VPN is a single PC talking to a LAN via a VPN server or gateway, as opposed to a site to site VPN, which generally connects two networks or LANs together over the internet.

Here is a little background on IPSEC.

IPSEC VPNs generally consist of two phases.  Phase 1 is an identification phase, where two IPSEC gateways identify each other.  If the identification is successful (the two gateways trust each other), then Phase 2, the ‘tunnel’ phase, can occur.  During this phase, the two gateways negotiate the IP subnet traffic that will be allowed to traverse the tunnel and how to encrypt that traffic.  Generally this traffic is protected by using 3DES or AES encryption, with dynamic key rotation.  IPSEC, if set up properly, is very hard to compromise.

Phase 1 typically uses UDP port 500 to perform IKE/ISAKMP negotiations.  Phase 2, in our case, will use IPSEC ESP (Encapsulated Security Payload), which is a transport layer protocol (like TCP and UDP, it runs at layer 4 of the OSI model).  If a NAT firewall is detected in between the two gateways, then ESP can be encapsulated in UDP port 4500 or UDP port 500, depending on the implementation used.  This encapsulation is called IPSEC NAT traversal, or NAT-T.

We will start with the following example network layout:

Diagram of Example VPN Network

First we will start with the Netscreen configuration, which is best performed from the Netscreen command line.  Keep in mind that this will work on several Netscreen models sold over the last few years, including the Netscreen 5XT and 5GT.  These units can be found on Ebay for less than $100, and are highly recommended for use as a home or small office firewall.

Note:  It is best to have a static IP for the VPN gateway side of the connection.  A dynamic IP will work; however, it will become difficult to track the IP as the ISP reassigns addressing to the Juniper via DHCP or other means.  Newer Juniper Netscreens can support dynamic DNS registration with DDNS providers, which would make tracking the IP easier.

So, let's get started.  From an ssh or telnet session, log in to your Netscreen.  The first thing we need to do is define VPN users and a group – here is an example of creating a user “rwalters”.  This user ID is used for IKE negotiation.

Phase 1: (IKE gateway negotiation)

User definition:

set user "rwalters" uid 1
set user "rwalters" ike-id u-fqdn "" share-limit 1
set user "rwalters" type ike
set user "rwalters" "enable"

After creating a user, we should add that user to a group.  While not strictly required, if you want more than one client VPN to be active at a time you should add your users to a group.  The group will be used in the IKE gateway definition, and any of the users in the group will be allowed to authenticate in Phase 1.

Here we create a group called dialupusers:

set user-group "dialupusers" id 1
set user-group "dialupusers" user "rwalters"

Next we will define the IKE Phase 1 gateway definition.  In most cases, we use pre-shared key authentication, which is basically a password.  Other forms of credentials can be used as well such as RADIUS or X-AUTH but those are beyond the scope of this tutorial.  Here is an example of the ScreenOS command for IKE gateway definition:

set ike gateway "Publicdialupvpn" dialup "dialupusers" Aggr outgoing-interface "ethernet0/0" preshare "<password>" proposal "pre-g2-3des-sha"
set ike gateway "Publicdialupvpn" nat-traversal udp-checksum
set ike gateway "Publicdialupvpn" nat-traversal keepalive-frequency 5

The first command creates an IKE gateway called “Publicdialupvpn” and associates the “dialupusers” group with it.  It also defines the outgoing interface, ethernet0/0, and the preshared key or password.  You should use a complex password in place of the <password> shown in the command.  “Aggr” means aggressive mode, which is used when the IP address of one of the gateways is dynamic – in this case, the laptop will almost always have a dynamic IP address.  The remaining commands set nat-traversal capabilities.  Without NAT-T, Phase 2 will not come up if a NAT router or firewall is present in the middle of the network.  The last piece is the encryption used for IKE traffic – in this case 3DES, with the SHA-1 hashing algorithm.

Next, we will define the Phase 2 tunnel.  An example is below:

set vpn "Publicdialupvpn" gateway "Publicdialupvpn" no-replay tunnel idletime 0 proposal "g2-esp-3des-sha"

This command is pretty simple.  It defines a VPN (or tunnel) called Publicdialupvpn, using the IKE gateway definition of the same name. It also turns on no-replay, which prevents replaying of traffic (a common method of hacking VPNs) and sets the transport to ESP, and encryption to 3DES, with SHA-1 hashing.

The last piece of Netscreen configuration is the Firewall policy to allow the encrypted traffic to the internal network.  This policy is actually used as part of phase 2 as well, since phase 2 requires the exchange and agreement on the ip addresses that will be allowed to traverse the tunnel (known as a policy based VPN in Juniper terminology).

In this example we will create an address object for the local LAN:

set address "Trust" "Local LAN"

Then we will create a policy to allow that address to be tunnelled to from the outside.  Note that Dial-Up VPN is a default address book entry that ships with ScreenOS.  Also note that you can limit the ports/protocols allowed through the tunnel by changing the “ANY” to a specific service – http, for instance.

set policy id 8 from "Untrust" to "Trust" "Dial-Up VPN" "Local LAN" "ANY" tunnel vpn "Publicdialupvpn" id 0x3 log
set policy id 8
set log session-init

So, that should take care of the Netscreen side of the equation.  Next we will tackle the Linux side, which is the fun part.  The beauty of Linux IPSEC is that it's one of those built-in features that you would have to pay for if using one of those operating systems made in Redmond, WA.

Linux IPSEC has a few requirements.  If using Ubuntu, all the kernel requirements are already fulfilled.  If you’re using Gentoo or a custom kernel, make sure the following is set in your kernel config:

Under networking:


Under Crypto/Block:


Under Crypto/Ciphers:


Linux IPSEC is handled by the kernel in conjunction with two different packages: ipsec-tools and racoon.  Both of these must be installed and configured for our VPN to work properly.  On Ubuntu you can install these with:

apt-get install ipsec-tools

apt-get install racoon

On gentoo, just a simple:

emerge ipsec-tools

Should install the necessary software.

Three files need to be customized for Linux IPSEC VPNs to work with a Juniper Netscreen.  The first file, ipsec-tools.conf, usually resides in /etc, and the second, psk.txt, in /etc/racoon.  The third file is racoon.conf, which is also in the /etc/racoon directory.

The ipsec-tools.conf file handles the Phase 2 security associations, and the racoon.conf/psk.txt files provide IKE (Phase 1) and dynamic re-keying of encryption between the two VPN endpoints.  Here is an example of an ipsec-tools.conf file that would work for our sample VPN diagram and Juniper configuration above.

#!/usr/sbin/setkey -f


flush;
spdflush;

#outbound
spdadd <laptop ip> <remote lan subnet> any
    -P out ipsec esp/tunnel/<laptop ip>-<juniper ip>/require;

#inbound
spdadd <remote lan subnet> <laptop ip> any
    -P in ipsec esp/tunnel/<juniper ip>-<laptop ip>/require;

The information in this file is pretty straightforward.  Essentially it's a tunnelling policy.  It states that all traffic between our laptop's internet IP and the remote LAN behind the Juniper be tunnelled through the ESP tunnel between the laptop and the Juniper's internet IP, and vice versa.  Note that this file must be updated with the laptop or remote PC's current IP address every time a VPN is started.  For instance, if your laptop gets a new IP address from the ISP you're using to connect to the internet, all of the laptop IP addresses in the ipsec-tools.conf file will have to be changed to match.
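With hypothetical addresses filled in – say the laptop picked up, the Juniper's internet IP is, and the remote LAN is – the file would read:

```
#!/usr/sbin/setkey -f

flush;
spdflush;

#outbound
spdadd any
    -P out ipsec esp/tunnel/;

#inbound
spdadd any
    -P in ipsec esp/tunnel/;
```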

The other two files /etc/racoon/psk.txt, and /etc/racoon/racoon.conf are relatively static.   The first file, psk.txt, is essentially a list of IP addresses of remote VPN gateways and the pre-shared key to use for a password when doing IKE with that gateway.

The password in this file should match the password used in the netscreen Phase 1 IKE configuration shown above.  Here is a sample of that file:

# IPv4/v6 addresses
<ip address of your juniper gateway>    <your pre-shared key password>
3ffe:501:410:ffff:200:86ff:fe05:80fa    mekmitasdigoat
3ffe:501:410:ffff:210:4bff:fea2:8baa    mekmitasdigoat

The last file, racoon.conf, controls Phase 1 IKE negotiation, Phase 2 VPN SA setup, and VPN re-key.  This file rarely changes for a particular VPN definition.

Here is a sample racoon.conf:

path pre_shared_key "/etc/racoon/psk.txt";

# Remote host - "anonymous" accepts any peer; you can instead name
# the Juniper's public IP here
remote anonymous
        exchange_mode aggressive;

        # Change this to your local ID
        my_identifier user_fqdn "";
        lifetime time 28800 sec;
        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group modp1024;

# A sample sainfo section
# Create one for each subnet you want to access, etc.
#sainfo address <remote lan subnet> any address <laptop ip> any
sainfo anonymous
        pfs_group modp1024;
        lifetime time 3600 sec;
        encryption_algorithm 3des;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;

Generally there are two sections to racoon.conf.  The first section controls IKE parameters and Phase 1 negotiations.  The second section controls Phase 2 negotiations.  There are a couple of things I would like to point out.  First, notice the local ID, or user fqdn.  This should be the same as one of the users created on the Juniper side.  The second item is the lifetime and encryption types.  The first lifetime value, in the IKE/Phase 1 section, dictates the length of time the two gateways will trust each other without re-identification.  The Phase 2 section has its own lifetime parameter as well.  This controls the re-key time on the tunnel – in this case, every 3600 seconds (one hour) the tunnel encryption keys will be renegotiated.

Note also the “aggressive mode” statement – this should match the Phase 1 definition on the Juniper as well.

So, after the files have been configured correctly, you need to start the VPN.  The first step is to run the ipsec-tools.conf file (which is why it should be executable).  The last step is to start racoon.  Here are the example commands:

/etc/ipsec-tools.conf

/etc/init.d/racoon start

After the racoon daemon is started, try to ping something on the remote LAN – for example, a machine on the protected network behind the Juniper.  The ping will start the IPSEC negotiations.  Watching your /var/log/daemon.log or /var/log/messages will show you what is happening.

So – what if it doesn’t work?  Double check your configuration.  The Juniper ScreenOS event log (command: get event) is very helpful in determining what is happening.  The Linux /var/log/daemon.log and /var/log/messages will also be helpful.

Ok – so this seems like a lot of work, modifying files and starting services – I agree it's not quite optimal, at least on the Linux side.  So, I wrote a couple of scripts – one to determine your PC/laptop's IP address, create the ipsec-tools.conf file, run it, and then run racoon.  As a bonus, it also replaces your resolv.conf with another resolv.conf – in case there is a DNS server for your remote network you would like to use to be able to resolve machine names on your protected LAN.  The second script clears the ipsec-tools.conf policies, stops racoon, and replaces the resolv.conf with the original one.

With the scripts below, starting a VPN to your office or home LAN is as simple as connecting to the internet and running the first script.  Afterwards, to undo the changes, you run the second script.  This way, only one-time changes are needed to /etc/racoon/psk.txt and /etc/racoon/racoon.conf.

Here is the script for starting the vpn:


#!/usr/bin/env python
#script to find outgoing internet interface (by opening a socket to
#an outside host) and build a vpn policy file, then turn up the VPN tunnel

import socket
import os

def OutputSpace2file():
    filehandle.write ( ' ' )

s = socket.socket()
# Connect to an outside host to find the outgoing IP address
# (any reachable internet host will do here)
s.connect ( ('www.google.com', 80) )
ipport = s.getsockname()
ipaddr = ipport[0]
s.close()
destip = ''      # fill in: internet IP of the Juniper gateway
destsubnet = ''  # fill in: remote LAN subnet behind the Juniper
print "Setting VPN tunnel up from Source:",ipaddr,"To IP address:",destip
print "For destination subnet",destsubnet
print "Generating ipsec-tools.conf file in /etc"
filehandle = open ('/etc/ipsec-tools.conf','w')
filehandle.write ( '#!/usr/sbin/setkey -f\n' )
filehandle.write ( '\n' )
filehandle.write ( 'flush;\n' )
filehandle.write ( 'spdflush;\n\n' )
filehandle.write ( '#outbound\n' )
filehandle.write ( 'spdadd ' )
filehandle.write ( ipaddr )
OutputSpace2file()
filehandle.write ( destsubnet )
filehandle.write ( ' any\n' )
filehandle.write ( '    -P out ipsec esp/tunnel/' )
filehandle.write ( ipaddr )
filehandle.write ( '-' )
filehandle.write ( destip )
filehandle.write ( '/require;\n\n' )
filehandle.write ( '#inbound\n' )
filehandle.write ( 'spdadd ' )
filehandle.write ( destsubnet )
OutputSpace2file()
filehandle.write ( ipaddr )
filehandle.write ( ' any\n' )
filehandle.write ( '    -P in ipsec esp/tunnel/' )
filehandle.write ( destip )
filehandle.write ( '-' )
filehandle.write ( ipaddr )
filehandle.write ( '/require;\n' )
filehandle.close()
# set permissions on new policy file
rc = os.system( 'chmod a+x /etc/ipsec-tools.conf' )
# Check return code for permissions command
if rc != 0:
    print "Error setting permissions!"
# Continue by running ipsec-tools.conf script
print "Running ipsec-tools.conf script."
rc = os.system( '/etc/ipsec-tools.conf' )
# Check return code for command
if rc != 0:
    print "Error running ipsec-tools.conf script!"
# Continue by reloading racoon
print "Reloading racoon IKE server."
rc = os.system( '/etc/init.d/racoon reload' )
# Check return code for command
if rc != 0:
    print "Error running /etc/init.d/racoon reload!"

# backup existing /etc/resolv.conf and build new one for remote lan
print "Backing up /etc/resolv.conf and setting resolv.conf to remote LAN DNS"
rc = os.system( 'mv /etc/resolv.conf /etc/resolv.conf.bak' )
if rc != 0:
    print "Error backing up /etc/resolv.conf"
filehandle = open ('/etc/resolv.conf','w')
filehandle.write ( '# File created by LPU script\n' )
filehandle.write ( 'search homelan.com\n' )
filehandle.write ( 'nameserver ' )  # fill in: IP of a DNS server on the remote LAN
filehandle.close()

And for stopping the VPN:


#Script to stop the vpn started by the start script above and restore resolv.conf

import os

# Stop the vpn by flushing the ipsec policies
print "Flushing IPSEC policies"
rc = os.system( '/usr/sbin/setkey -FP' )
# Check return code for the flush command
if rc != 0:
    print "Error flushing policies!"
# Continue by restoring the original resolver configuration
print "Restoring /etc/resolv.conf to previous state"
rc = os.system( 'cp /etc/resolv.conf.bak /etc/resolv.conf' )
if rc != 0:
    print "Error restoring /etc/resolv.conf - name resolution will not work properly!"
# Continue by stopping racoon
print "Stopping racoon IKE server."
rc = os.system( '/etc/init.d/racoon stop' )
if rc != 0:
    print "Error running /etc/init.d/racoon stop!"

So, there you have it.  I use this type of vpn setup regularly to access my home network when on the road.  So far, I haven’t found anything that keeps it from working, unless the hotel ISP network assigns my laptop a 192.168.0 IP.  This of course renders the vpn useless, since you can’t tunnel from a 192.168.0 address to the same 192.168.0 subnet; it confuses the ip stack and routing processes.  Feel free to use these scripts in your own environment.

Some purists may ask, why not use linux for both ends of the tunnel?  While this can be done, it’s hard to beat the price and usability of Juniper’s Netscreen line.  It is an excellent firewall, and used units can be found on eBay for less than $100 now; look for the Netscreen 5GT or 5XT.

I have tried to cover some key concepts of IPSEC VPNs, but this is by no means a complete overview; it is more an implementation for a specific application.  Several IPSEC references are available on the internet via this Wikipedia article.

Network Emulation with Linux Netem

Back in the day, I used an open source program called NistNET to emulate a WAN for my company’s network test lab on a linux machine.  I was able to solve a multitude of issues and test our applications in a WAN environment with this product.  Unfortunately, NistNET is no longer maintained, and until recently I had no open source tool for emulating a network in my arsenal.  The other day, while playing Call of Duty 2 with some friends on my dedicated linux server, I decided I was tired of having an unfair advantage: my latency to the server was 1 ms, while they all had 50-70 ms or more.  So I went on a search for something I could use to add delay to my connection to the server (one of my buddies says I am too honest).  After some searching, I came upon netem, which, to my surprise, has been part of the linux kernel for some time.  I know, some of you linux guys and gals out there are saying “tell me something I don’t know,” but, ashamedly, I didn’t know about this one, perhaps because I don’t do much of that kind of work any more.

Anyway, using netem, I could do exactly what I wanted: add enough delay to my client-to-game-server traffic (actually, it’s server to client, which I will explain later) to make it seem like there is a lot more network between the server and me than there actually is.  Whether or not this decreases my advantage in the game remains to be seen.

Several “effects” are present on most wide area networks today.  A common one, latency, can drastically change the way network communications protocols behave.  Latency is also one of the key issues with playing online games, especially those that require fast reaction to on-screen events.  High latency creates what gamers refer to as lag.

The netem function of linux provides the capability to modify the parameters of egress traffic (ie, traffic exiting the machine and destined for another point on the network).  With netem, it’s possible to create artificial delay, thus creating latency.  Other possibilities are rate limiting (controlling the amount of bandwidth traffic can use), packet loss, and jitter.  Packet loss can result in very poor performance with TCP applications.  Jitter, also known as variable delay, is bad for real-time streaming applications such as voice over IP.
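
The ping outputs later in this article summarize these effects on a min/avg/max/mdev line; as far as I know, iputils ping computes mdev as the population standard deviation of the RTT samples. A few lines of Python 3 reproduce it (the sample values below are approximations taken from the jitter test further down):

```python
import math

def rtt_summary(samples_ms):
    """Reproduce ping's min/avg/max/mdev summary from raw RTT samples."""
    n = len(samples_ms)
    avg = sum(samples_ms) / n
    # mdev = population standard deviation of the RTTs
    mdev = math.sqrt(sum(x * x for x in samples_ms) / n - avg * avg)
    return min(samples_ms), avg, max(samples_ms), mdev

print("rtt min/avg/max/mdev = %.3f/%.3f/%.3f/%.3f ms"
      % rtt_summary([72.6, 84.6, 86.7, 84.0]))
```

A big mdev relative to the average is the numeric signature of a jittery link.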

Anyway, you could probably see why this kind of stuff would be important to a network engineer, especially in a lab environment.

So – on to how to use netem.  Netem is controlled by the tc command, which is part of the iproute2 package and is included with most linux distributions.

Using the tc command, we can easily tell a linux host to delay all packets exiting a network interface using this command:

tc qdisc add dev eth0 root netem delay 80ms

This will add 80ms of delay to all packets leaving the eth0 interface.  To test the result of this command, just do a ping from your machine before issuing the command, and then after:

ping -n

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from icmp_seq=2 ttl=64 time=0.394 ms
64 bytes from icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from icmp_seq=4 ttl=64 time=0.101 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.070/0.167/0.394/0.131 ms

Enter the tc command for adding delay to eth0:

tc qdisc add dev eth0 root netem delay 80ms

Then ping again:

ping -n
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=80.0 ms
64 bytes from icmp_seq=2 ttl=64 time=80.0 ms
64 bytes from icmp_seq=3 ttl=64 time=80.0 ms
64 bytes from icmp_seq=4 ttl=64 time=80.4 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 80.073/80.164/80.414/0.246 ms

Notice the difference in delay (~80ms).

We can also add variable delay (jitter), as most wide area networks (such as the internet) have some jitter associated with them.  The following command will add +/- 10ms of jitter to the 80ms delay shown in the last example (note: if the previous netem qdisc is still in place, use “change” instead of “add”, since tc will refuse to add a second root qdisc):

tc qdisc add dev eth0 root netem delay 80ms 10ms

Now lets do the ping again:

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=72.6 ms
64 bytes from icmp_seq=2 ttl=64 time=84.6 ms
64 bytes from icmp_seq=3 ttl=64 time=86.7 ms
64 bytes from icmp_seq=4 ttl=64 time=84.0 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 72.648/82.023/86.752/5.510 ms

Looks even more like a real internet connection now.

To see what qdisc (short for queuing discipline) parameters have been applied to an interface (in this case eth0) use the following command:

tc qdisc show dev eth0

Sample output follows:

qdisc netem 8003: root limit 1000 delay 80.0ms  10.0ms

The last part of the output shows that a delay of 80ms +/- 10ms is applied.

Now, for the important part – how do you turn this off? It took a while to find this in the netem documentation:

tc qdisc del dev eth0 root

This will remove all queuing discipline parameters from the eth0 interface on your system.
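
If you end up scripting these qdisc changes, it can help to assemble the tc invocations in one place. Here is a hypothetical Python 3 helper (the function and its interface are my own, not part of iproute2); it only builds the argument lists, which you could pass to subprocess as root:

```python
def netem_cmd(action, dev, delay_ms=None, jitter_ms=None):
    """Build an argument list for tc's netem qdisc (add/change/del)."""
    cmd = ["tc", "qdisc", action, "dev", dev, "root"]
    if action != "del":
        cmd += ["netem"]
        if delay_ms is not None:
            cmd += ["delay", "%dms" % delay_ms]
            if jitter_ms is not None:
                cmd += ["%dms" % jitter_ms]
    return cmd

print(" ".join(netem_cmd("add", "eth0", 80)))
print(" ".join(netem_cmd("change", "eth0", 80, 10)))
print(" ".join(netem_cmd("del", "eth0")))
```

The three lines printed match the add, change-with-jitter, and delete commands shown in this article.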

So this is great, but not exactly what I am looking for.  Adding delay wholesale to the server would increase my fellow gamers’ latency as well as mine, and the idea is to level the playing field.

That is ok, since netem/tc has a way to place qdiscs on specific traffic only.  In my test network, I have two machines: one running Windows 7 (in this case the Call of Duty client) and one running Ubuntu 9.10 (the COD2 server).  The Windows machine has an IP of (it shows up hex-encoded as c0a8000f in the filter output further down), and the server sits on the same subnet.

On the linux server, I run the following commands as root:

tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 140ms 10ms distribution normal
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst flowid 1:3

This set of commands creates a simple priority queuing discipline, attaches a basic latency netem at band 3, and then tells all traffic destined to to be priority 3, and thus subject to the netem delay of 140ms +/- 10ms (with a normal statistical distribution of jitter).

These commands do exactly what I was wanting – making my delay to the game server about equal to my friends.

So far, it seems to work, however it’s not optimal.  The reason is that only the packets going from the server to my Windows client machine are being delayed.  A true internet connection would have delay in both directions.  Since netem only affects traffic as it egresses a network interface, you would technically have to delay the traffic as it leaves the client PC, and delay the traffic as it leaves the server back towards the client.  Since Windows doesn’t have a netem facility (at least not without some expensive commercial software such as that from Shunra), the best way to do this would be to run Call of Duty 2 on Linux using wine (which is another article for another time).  That way I could induce delay on both machines and get a “more perfect” simulation of the internet.

To show the existing filters and qdiscs, such as those set by the last set of commands, you can use the following commands:

tc filter show dev eth0


tc qdisc show dev eth0

Here is an example output:

tc filter show dev eth0

filter parent 1: protocol ip pref 3 u32
filter parent 1: protocol ip pref 3 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 3 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:3
match c0a8000f/ffffffff at 16

tc qdisc show dev eth0

qdisc prio 1: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc netem 30: parent 1:3 limit 1000 delay 140.0ms  10.0ms

In summary, netem is a perfect example of what I love most about Linux: flexibility and utility beyond what most commercial operating systems (especially those from Redmond) offer out of the box, for free.  While the way I have implemented it is not quite perfect, netem can provide what I am looking for in terms of simulating network conditions.  As I mentioned above, perfection could be achieved by using linux as the client as well as the server, or by placing a dual-interface linux machine as a bridge between the two machines, with netem delay on both interfaces.

Netem has several capabilities that I didn’t cover, which can be found at the following links:

The Linux Foundation has an overview page here.
Here is pdf file showing more details on netem and tc usage.

Next up – Client IPSEC VPNs from a linux laptop to a Juniper Netscreen VPN/Firewall device. Coming Soon!

SSH Tunnelling (aka Poor Man’s VPN)

Tunnelling of TCP traffic can be performed with the ssh command on Linux or with Putty on Windows, and can be thought of as a poor man’s VPN.  A VPN is a virtual private network: a method of using a public network such as the internet to securely transmit data via an encrypted “tunnel”.

VNC is a method of gaining access to a remote GUI on Linux and Windows machines.  VNC is typically considered insecure and not recommended for use on the open internet.  With an SSH tunnel, this doesn’t have to be an issue, as ssh provides security to an otherwise insecure protocol.

Here is an example of how to use VNC over an SSH tunnel:

Start a VNC server on the Linux ssh server host, listening only on the loopback interface:

vncserver :1 -localhost

On client machine, start ssh with the following command line:

ssh -L 5901:localhost:5901 <server ip> [-l <login>]

Then, to access VNC via the SSH tunnel, use the following command on the client machine:

vncviewer localhost:1

What happens?

The ssh process on the client sets up a TCP port redirection from port 5901 on the local loopback interface to port 5901 on the loopback interface of the server machine.  The vncviewer command connects to the redirected port on the local loopback interface, and the connection is carried over the tunnel to the server machine’s loopback on port 5901, where the vncserver is listening.

This will allow vnc protocol to be securely tunnelled across the SSH connection.
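
Conceptually, the client side of ssh -L is just a listener that relays bytes between a local port and a remote endpoint. Here is a stripped-down (and, unlike ssh, completely unencrypted!) Python 3 sketch of such a relay, purely to illustrate the redirection; the function names are my own:

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one direction until the source side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def relay_once(listener, target_host, target_port):
    """Accept one connection on an already-listening socket and relay
    it to target_host:target_port in both directions."""
    client, _ = listener.accept()
    upstream = socket.create_connection((target_host, target_port))
    t = threading.Thread(target=pipe, args=(client, upstream))
    t.start()
    pipe(upstream, client)
    t.join()
    client.close()
    upstream.close()
```

In real life, ssh and sshd play the two ends of this relay, with encryption over the hop between them; that encryption is the whole point of using ssh instead of something like this.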

The previous example showed connecting to a service running on the ssh server itself.  It’s also possible to use the SSH server to redirect traffic to other machines on the network behind it.

Sometimes we might have to access a Windows machine behind a linux SSH server that is connected to the internet (such as a linux system performing firewalling for a home network).  We can use SSH tunnelling to connect to Windows Remote Desktop as well.  (I know this is a linux blog, but most of us out there still have to deal with Windows from time to time.)

The following example assumes the following network layout:

Client PC --> Internet --> SSH Server on Firewall --> Private Network --> Windows XP

(The ssh server can be behind the firewall as long as it’s accessible from the Internet.)

1. Make sure the Windows XP host is running RDP.

2. On the client PC, start SSH with tunnelling, redirecting local port 4000 to port 3389 on the internal Windows XP machine.

In Putty, this tunnel is defined under Connection > SSH > Tunnels: enter 4000 as the source port and the Windows machine’s address and port 3389 as the destination, then click Add.  This is exactly like the ssh command on Linux:

ssh -L 4000:<windows ip>:3389 <server> -l <userid>

To connect to the RDP service on the internal Windows XP system from the internet-connected client, use the Remote Desktop Connection application that comes with XP, but use localhost:4000 as the address to connect to.

Like the previous example, this causes the program to connect to port 4000 on the local loopback interface, which is then redirected to port 3389 on the machine at the other end of the ssh tunnel.  Port 4000 is used to avoid conflicting with port 3389 on the client, which could have its own RDP server running.

Any TCP-based communications can be tunnelled this way over ssh, creating a secure connection for any insecure protocol.  This is also a mechanism for bypassing firewall rules: as long as SSH traffic (TCP port 22) is allowed, a remote ssh server can act as a proxy, giving access to other ports that a local firewall might not allow.

Boot Sector Management

As promised, tonight we explore boot sector management on x86 style hardware.  Anyone who works with PC hardware long enough, especially those running linux as a primary or secondary OS in a dual boot configuration, will find this information valuable.

The system boot sector on x86 style hardware is crucial to being able to boot a linux system on this common platform.  Occasionally the boot sector becomes corrupted or needs to be backed up.  In the days of MS DOS systems, a command was used to “restore” the boot sector:

fdisk /mbr

Essentially this would re-write the boot sector code on the primary hard disk.

The dd command can be used to perform similar functions, however as is usual with Linux, more boot sector related tasks can be accomplished.

First of all, let’s review the structure of a boot sector, or master boot record, on a PC hard disk:

Format of the boot sector:

Size (bytes)  Description
440           Executable code section
4             Optional disk signature
2             Usually nulls
64            Partition table (four 16-byte entries)
2             MBR signature (0x55AA)

(The classic view treats the first 446 bytes - code, signature, and nulls - as one bootstrap area, which is why the dd examples below use 446.)

The first 446 bytes of the boot sector contain executable code that is loaded by the BIOS and then executed, and this is where OS boot loaders and boot managers (such as grub) store their initial code.  It’s also an area of the disk that can become corrupted or replaced during operating system installs.

The other part of the boot sector that is significant is the partition table.  This is where the disk partition information is stored.  This should not be modified by anything other than a disk partitioning utility such as fdisk.  It can be backed up for data security reasons though.  The total bytes in the master boot record comes to 512.  With dd, simply reading or writing the first 446 or 512 bytes of the disk device will read or write the master boot record.
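
To make the 512-byte layout concrete, here is a short Python 3 sketch (my own illustration, run against synthetic data rather than a real disk) that checks the 0x55AA signature and decodes the four partition-table entries:

```python
import struct

def parse_mbr(sector):
    """Parse partition entries from a 512-byte master boot record."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR")
    parts = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        if entry[4] == 0:  # partition type 0 = unused slot
            continue
        lba_start, sectors = struct.unpack("<II", entry[8:16])
        parts.append({"bootable": entry[0] == 0x80,
                      "type": entry[4],
                      "lba_start": lba_start,
                      "sectors": sectors})
    return parts

# Build a synthetic MBR with one bootable NTFS-type (0x07) partition.
entry = bytes([0x80, 0, 0, 0, 0x07, 0, 0, 0]) + struct.pack("<II", 63, 1000000)
mbr = bytes(446) + entry + bytes(48) + b"\x55\xaa"
print(parse_mbr(mbr))
```

Feeding it the first 512 bytes captured by the dd backup below would list the same partitions fdisk shows.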

Scenario 1:  Backup the boot sector (or MBR)

If the first hard disk in the system is /dev/sda, the following command can be used to back up the boot sector:

# dd if=/dev/sda of=bsbackup.bin bs=512 count=1

Essentially this command will read the first 512 bytes of /dev/sda and write it to the file bsbackup.bin.

Scenario 2: Restore the boot sector from a file:

# dd if=bsbackup.bin of=/dev/sda bs=512 count=1

This will restore the boot sector to /dev/sda that was backed up in Scenario 1.
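
For illustration, the same backup and restore can be sketched in a few lines of Python, shown here against an ordinary file standing in for /dev/sda (the function names are my own):

```python
def backup_mbr(disk_path, backup_path, nbytes=512):
    """Read the first nbytes of the disk and save them to a file."""
    with open(disk_path, "rb") as disk, open(backup_path, "wb") as out:
        out.write(disk.read(nbytes))

def restore_mbr(backup_path, disk_path):
    """Write the saved sector back over the start of the disk."""
    with open(backup_path, "rb") as src:
        block = src.read()
    # r+b rewrites the start of the device/file without truncating the rest
    with open(disk_path, "r+b") as disk:
        disk.write(block)
```

The r+b mode matters: opening the target with plain wb would truncate a regular file, whereas dd (and this sketch) only overwrite the leading bytes.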

Scenario 3:  Zero out the boot sector (leaving the partition table intact)

Sometimes a virus or other issue can leave a corrupted executable code section in the MBR.  I have personally seen a boot sector that would not store grub information properly (and thus would not boot linux after installation) until the first 446 bytes were zeroed out and grub re-installed.  The following command will do just that:

# dd if=/dev/zero of=/dev/sda bs=446 count=1

Scenario 4:  Zero out the entire MBR (this will erase the partition table as well – effectively destroying the ability to easily access data on the drive)

A variation of the last dd command will wipe out the master boot record entirely.  You will have to repartition and reformat your hard disk after this:

# dd if=/dev/zero of=/dev/sda bs=512 count=1

In summary, the use of dd for boot sector management is a handy tool to have in your linux arsenal.

Next up are some networking topics, such as SSH tunneling and IPSEC VPNs.  Keep watching the site, or subscribe to our RSS Feed.

Disk and Partition Imaging using dd

Linux provides an abundance of advanced command line tools to manage and modify just about anything on your system.  Today we will explore the use of dd, the primary tool on linux for creating and restoring disk images, among other things.

The dd (often read as “disk dump”) command on Linux can be used to back up an entire disk or partition to an image file. Several caveats apply to this method:

  1. The disk in question cannot be in use by an operating system
  2. A destination medium or network resource must be present that is large enough to hold the image.

To backup a disk using dd, the following procedure can be used.

  1. Boot the computer with the disk in question from a Linux Live CD, such as Ubuntu or Knoppix
  2. Mount a destination disk (such as a usb disk drive or nfs mount)
  3. Run dd command to backup disk
  4. Note the size of the disk partition if partitioning a new device is necessary when restoring the image

Here is an example session, to back up a single partition (sda1) containing a Windows XP installation to a USB hard disk mounted at /mnt/sdb1:

As a root user, do the following:

Mount USB disk drive

# mount -t ext3 /dev/sdb1 /mnt/sdb1

Run dd command (piping output through gzip to save space):

# dd if=/dev/sda1 conv=sync,noerror bs=64k | gzip -c > /mnt/sdb1/windowsxp-c.img.gz

Definition of the dd command parameters:

“if=/dev/sda1” is the input file for the dd command; in this case, it’s linux device sda1.
“conv=sync,noerror” instructs dd that if it can’t read a block due to an error, it should still write a block of the correct length to its output.

Even if your Hard disk exhibits no errors, dd will read every single block, including any which the OS avoids because it has marked them as bad.

“bs=64k” is the block size of 64 kilobytes. Using a large block size speeds up the copy process. The output of this is then passed to gzip for compression and storage in a file on the destination device.
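
The dd | gzip pipeline can be mimicked in Python 3 to see the moving parts: read fixed-size blocks and stream them through a compressor (paths and names here are examples; dd's noerror/sync error handling is not reproduced):

```python
import gzip
import shutil

def image_to_gz(src_path, dst_gz_path, block_size=64 * 1024):
    """Stream src into a gzip-compressed image file, block by block."""
    with open(src_path, "rb") as src, gzip.open(dst_gz_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)

def restore_from_gz(src_gz_path, dst_path):
    """Decompress a gzip image back onto a file or device."""
    with gzip.open(src_gz_path, "rb") as src, open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
```

Just like the shell pipeline, nothing here ever holds the whole image in memory; only one block is in flight at a time.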

Noting Partition configuration:

Using the command fdisk -l /dev/<device> where <device> is the device node of the disk being backed up, make note of the number of blocks used to create the partition:

# fdisk -l /dev/sda

Disk /dev/sda: 959.9 GB, 959966085120 bytes
255 heads, 63 sectors/track, 116709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x2e2d2e2d

Device Boot Start  End   Blocks     Id System
/dev/sda1*      1  15298 122881153+ 7  HPFS/NTFS
/dev/sda2   15299  51771 292969372+ 83 Linux
/dev/sda3   51772  52767   8000370  82 Linux swap / Solaris
/dev/sda4   52768 116709 513614115  83 Linux

The destination disk should have a partition defined identical to the source partition; the total number of blocks is the important parameter here.

The partition geometry information can be backed up to the USB hard disk with the following command:

# fdisk -l /dev/sda > /mnt/sdb1/sda_fdisk.txt

Restoring a dd image to a disk/partition

The steps are similar to the backup process:

  1. Boot computer with destination disk from a Linux Live CD
  2. Partition the destination disk, if needed
  3. Mount the source media (usb disk or nfs mount)
  4. Use gunzip and dd to restore image to disk or partition

Here is an example session, to restore the image taken with the above steps:

As a root user, do the following:

Mount image source (USB hard disk at /dev/sdb1)

# mount -t ext3 /dev/sdb1 /mnt/sdb1

Partition destination disk:

# fdisk /dev/<device node in question>; in our case, sda.

<create partition if needed>

Restore image (destination partition is /dev/sda1)

# gunzip -c /mnt/sdb1/windowsxp-c.img.gz | dd of=/dev/sda1 conv=sync,noerror bs=64k

Note: On a fast machine (e.g. a Core 2 Quad Q6600 with a 3ware RAID disk array), a 120GB image takes about 25 minutes to create.

In order to have a bootable system, some other configuration may be needed such as restoring a boot block. Check out the next post for details on boot sector management with dd.

An excellent guide on using fdisk for disk partitioning can be found here.

Hello, world!

Welcome to the site!  With this blog, I intend to publish my experiences using Linux as a desktop and server operating system.  I have been using Linux since 1995, and around 2000 I completely replaced Windows with it as my primary personal computing environment.

I use linux for personal productivity, gaming, multimedia, programming, and as a server operating system, and I hope to share my knowledge of this powerful environment here on this blog.

A little about myself:

I am an IT systems professional with 17 years of experience in design, maintenance and management of enterprise networks.  I am currently responsible for planning and strategy of network infrastructure and Unified Communications technology investments for a major retail company based in the United States.

My favorite Linux Distros: Ubuntu, Gentoo.

Hobbies:  Linux, Data Communications, Computer Programming, Amateur Astronomy, Snow Skiing, Boating and Water Skiing, Biking, Computer Gaming, Beagles (I have three), Electric Guitar, Science Fiction Novels, Aviation

Favorite Music: Pink Floyd

Enjoy!  Hopefully you will find something valuable here if you’re interested in using Linux.  Stay tuned for more posts, as I have a bunch of content to share.