Thursday, April 30, 2009

Lazy Linux: 10 essential tricks for admins

Learn these 10 tricks and you'll be the most powerful Linux systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time—and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie expects that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, on most enterprise Linux servers, if a process is using that mount point, the ejection won't happen. For too long as a Linux administrator, I would reboot the machine to get my disk back if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is inefficient.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You'll see that a process is indeed using the mount point, and it's our own loop from the first terminal that prevents us from ejecting the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.
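You can rehearse the same diagnosis safely on an ordinary file. In this sketch (the demo function name is my own), a background tail holds a file open, fuser names the culprit, and fuser -k dispatches it:

```shell
# Hold a temp file open with a background process, then find and kill
# the holder with fuser -- the same moves as the DVD example above.
demo() {
  command -v fuser >/dev/null || { echo "fuser not installed here"; return 0; }
  tmp=$(mktemp)
  tail -f "$tmp" &          # a process holding the file open
  pid=$!
  sleep 1
  fuser "$tmp"              # prints the holder's PID
  fuser -k "$tmp"           # kill every process using the file
  wait "$pid" 2>/dev/null || true
  rm -f "$tmp"
}
demo
```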

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait, you say: typing reset feels dangerously close to typing reboot or shutdown. Your palms start to sweat—especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.
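If reset still makes your palms sweat, there is a gentler first step: stty sane restores the terminal modes (echo, line editing) without reinitializing the whole terminal. And if Enter itself seems dead, Ctrl-J sends a bare line feed, so typing Ctrl-J, then stty sane, then Ctrl-J again will often get you back. A sketch:

```shell
# Restore sane terminal modes; the [ -t 0 ] guard makes this a no-op
# when stdin isn't a terminal (e.g. when run from a script).
if [ -t 0 ]; then
  stty sane
  tput sgr0 2>/dev/null || true   # also clear any stuck color state
fi
```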

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type Ctrl-A D. (That is, hold down the Ctrl key and strike the A key, then press the D key.)

You can then reattach by running the screen -x foo command again.
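If you've forgotten whether the foo session is still alive, screen can tell you. A sketch, assuming screen is installed:

```shell
# List existing screen sessions; screen -ls exits non-zero when there
# are none, hence the || true.
if command -v screen >/dev/null; then
  screen -ls || true
fi
# From the listing you can then reattach with either of:
#   screen -r foo   # reattach only if the session is detached
#   screen -x foo   # attach even if someone else already is (shared view)
```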


Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen shown in Figure 1. Press an arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the boot entry with the arrow keys, and type E to edit it. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow keys again to highlight the line that begins with kernel, and press E to edit the kernel parameters. Then simply append the number 1 to the arguments, as shown in Figure 3:


Figure 3. Append the argument with the number 1

Then press Enter to accept the change and B to boot. The kernel will boot up in single-user mode. Once here you can run the passwd command, changing the password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.
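For reference, the edited kernel line ends up looking something like this (the kernel version and root device here are hypothetical; the trailing 1, which requests runlevel 1, is the only addition):

```
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1
```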

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come in to you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that the tunnel traffic is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.

  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$ ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together!
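As a refinement of step 2, ssh can also keep the tunnel alive itself instead of relying on the while loop: ServerAliveInterval sends a keepalive probe periodically, and ExitOnForwardFailure makes ssh fail loudly if port 2222 on blackbox is already taken. The sketch below adds -G, which only prints the options ssh would use (a dry run, available on newer OpenSSH); drop -G to open the tunnel for real:

```shell
# Dry-run the tunnel command: -G resolves and prints the effective
# options without connecting, so you can sanity-check them first.
ssh -G -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes \
    -R 2222:localhost:22 thedude@blackbox.example.com | grep serveralive
```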
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, a depth of 8 may be a better option. Using :99 specifies the display number the VNC server runs on. The VNC protocol starts at port 5900, so specifying display :99 means the server is accessible on port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L, which, instead of pushing local port 5999 to blackbox, pulls port 5999 from it. Once you are logged in to blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99

    Tech will now have a VNC session directly to ginger.
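The display-to-port arithmetic from step 1 is worth keeping straight, since every command above depends on it: display :N always means TCP port 5900+N.

```shell
# VNC display :N listens on TCP port 5900+N.
display=99
port=$((5900 + display))
echo "display :$display -> port $port"   # prints: display :99 -> port 5999
```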

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports through the options in its sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like Figure 5.


Figure 5. PuTTY can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of 125MBps. Where does that number come from? Well,

1Gb = 1000Mb; 1000Mb/8 = 125MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!
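You can catch that mismatched-driver situation before it bites by reading each NIC's driver straight out of sysfs, no ethtool required. A sketch (nic_drivers is a hypothetical helper; the optional argument exists only so the loop can be exercised against a fake sysfs tree):

```shell
# Print each network interface beside the kernel driver bound to it,
# using the standard /sys/class/net layout.
nic_drivers() {
  base=${1:-/sys/class/net}
  for link in "$base"/*/device/driver; do
    [ -e "$link" ] || continue
    nic=${link#$base/}
    nic=${nic%%/*}
    echo "$nic $(basename "$(readlink "$link")")"
  done
}
nic_drivers
```

If the two lines it prints for your bonded NICs name different drivers, you've found your 20MBps culprit.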

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
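Since the whole point of generating the file is avoiding duplicates, it's worth checking your work afterward. A quick sketch (check_dups is a hypothetical helper; it prints nothing when the file is clean):

```shell
# Print any IP address or host name that appears more than once in a
# hosts-format file (column 1 is the IP, column 2 the host name).
check_dups() {
  awk '{print $1}' "$1" | sort | uniq -d
  awk '{print $2}' "$1" | sort | uniq -d
}
check_dups /etc/hosts
```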

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to:

  • Use the free command to get the memory size in megabytes.
  • Take the output of that command and use grep to get the line that has the string Mem in it.
  • Take that line and use awk to print the second field, which is the total memory in the node.

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases:

  • If all the nodes, n001-n200, have the same memory size, then only one number will be displayed. This is the size of memory as seen by each operating system.
  • If node memory size is different, you will see several memory size values.
  • Finally, if the SSH failed on a certain node, then you may see some error messages.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed for a quick-and-dirty check.
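When the quick check does turn up a stray value, the follow-up command can be a labeled version of the same loop, printing each node's name beside its memory size. A sketch, with the same passwordless-SSH assumption; RUN is a hypothetical hook that defaults to ssh so the loop can be exercised without a cluster:

```shell
# Report "<node> <total MB>" for nodes n001..n<count>, so differing
# nodes are identified by name.
RUN=${RUN:-ssh}
report_mem() {
  for num in $(seq -w "$1"); do
    echo "n$num $($RUN "n$num" free -tm | grep Mem | awk '{print $2}')"
  done
}
# report_mem 200   # run it like this on the real cluster
```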

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.
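To sweep several consoles at once instead of cat'ing them one at a time, a small loop does it. Run it as root on the remote server; the guard silently skips consoles that don't exist or aren't readable:

```shell
# Dump the contents of virtual consoles 1-6, labeling each one.
for n in 1 2 3 4 5 6; do
  if [ -r "/dev/vcs$n" ]; then
    echo "===== console $n ====="
    cat "/dev/vcs$n"
  fi
done
```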

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. You can get this with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for just the information you want, so piping the output through less is an easy way to browse it. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0
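All of these one-liners roll up nicely into a single report you can hand to remote support. A sketch; the file name and section labels are my own invention, dmidecode needs root, and the fallbacks cover machines where a tool or device is missing:

```shell
# Gather CPU, memory, disk, BIOS, and NIC details into one file.
{
  echo "== CPU count =="
  grep -c '^processor' /proc/cpuinfo
  echo "== Memory (MB) =="
  free -m | grep Mem
  echo "== Disks =="
  df -h
  echo "== BIOS =="
  dmidecode 2>/dev/null | grep -A3 'BIOS Information' || echo "(need root or dmidecode)"
  echo "== NIC =="
  ethtool -i eth0 2>/dev/null || echo "(no eth0 or ethtool)"
} > sysinfo.txt
```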

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

  • Work with others. Share screen sessions and watch how others work—you'll see new approaches to doing things. You may need to swallow your pride and let other people drive, but often you can learn a lot.
  • Read the man pages. Seriously; reading man pages, even on commands you know like the back of your hand, can provide amazing insights. For example, did you know you can do network programming with awk?
  • Solve problems. As the system administrator, you are always solving problems whether they are created by you or by others. This is called experience, and experience makes you better and more efficient.

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.


Linux on a roll in mobile phones

It's only been about two years since Linux started becoming a significant factor in mobile phones, an arena that has been dominated by Symbian, Microsoft, and proprietary operating systems. With the burgeoning complexity of mobile phones, feature phones, and smart phones -- plus increasing time-to-market pressures -- there's a clear movement toward off-the-shelf, third-party operating systems based on industry standards, and Linux figures to be a major beneficiary of that trend.

How rosy is the mobile communications picture for Linux? Early indications in just the first two months of this year are that it will be very positive indeed. Consider Motorola, for example, one of the big three in mobile handsets, which brought its first Linux phone to market in 2003. That company is expecting to introduce between eight and ten new Linux phones in 2005, according to the Taiwanese daily newspaper DigiTimes, representing more than 25 percent of the company's planned introductions for the year.

Today, smartphones running Linux represent over ten percent of Motorola's mobile phone sales in China, where it enjoys the number one market position. And China, of course, is the biggest market for mobile phones today. Motorola sources its Linux from MontaVista, as do two other major mobile phone vendors: NEC and Panasonic.

Yet Linux has had virtually no impact on the mobile phones being sold in the US. Nevertheless, Motorola recently bowed its first Linux phone for the US market, a heavily multimedia-oriented device, which may presage more to come.

Trolltech certainly thinks so. That company is the provider of the Qtopia development environment and graphical user interface used by many Linux mobile phone makers. Earlier this month, Trolltech CEO Haavard Nord told LinuxDevices.com that 2005 would be a "breakout" year for Linux mobile phones and predicted that over twenty new devices were on the way, representing a new market "surge" for Linux handsets. Notable among these will be the first Linux phone from Ningbo Bird, the largest Chinese mobile phone manufacturer and exporter, expected to be launched by the middle of this year.

Early 2005 also saw the completion of PalmSource's acquisition of China MobileSoft and the adoption of several of its software products, including a Linux software stack for smartphones. Previously, in December of 2004, PalmSource had announced its plans to migrate to Linux in pursuit of the feature phone and smart phone markets, as well as its intention to soon offer the PalmOS as a middleware and application stack for Linux mobile phones.

Elsewhere during early 2005, Texas Instruments bowed a mobile phone reference design that includes an embedded Linux software stack, and Sky MobileMedia Inc. announced the integration of its SKY-MAP software platform with MontaVista's embedded Linux operating system. According to Sky CEO Richard Sfeir, Linux is becoming "the operating platform of choice for handset manufacturers requiring a robust and high performing operating system." Many feature-phone makers are "migrating to Linux for higher performance products," he said.

Moreover, there was one other bit of news in early 2005 that bodes well for the future of Linux in mobile phones: the revelation at the 3GSM conference in February that a second of the big three mobile phone makers, Samsung Electronics, has collaborated on a reference design for a 3G Linux smartphone with Infineon Technologies, Trolltech, and Emuzed. That design includes not just a Samsung application processor and camera module, but a Samsung-optimized Linux kernel as well.

As for Nokia, it is the only one of the leading trio of mobile phone makers that has made no noise about Linux. That's not very surprising, however, since Nokia holds a major stake in Symbian, a vendor that Linux interests are trying to displace.

The balance of this article provides links to LinuxDevices.com coverage relating to the use of Linux in mobile phones. Enjoy . . . !

Setting up Squid as your caching HTTP/FTP proxy

Squid is a proxy caching server for HTTP/FTP requests. It caches data
off the net on your local network, so the next time the same data is
accessed, whether it is HTML or a GIF, it gets served from the local
server rather than over the Internet -- saving you significant
bandwidth.

Let's use the most commonly available proxy server for Linux, and the
most stable one around: Squid. Installing and configuring it is a
breeze, as you'll soon find out. To make things simpler I would suggest
that you get the Squid RPM for your distro from any of the download
sites on the net. The latest stable release of Squid is
squid-2.3.STABLE1-5.i386.rpm. If you are not able to find it on your
distro's CD then I would suggest you try www.rpmfind.net. After having
downloaded the RPM, install it with the following command.

Assuming you have downloaded the squid-2.3.STABLE1-5.i386.rpm release
the installation command is as follows.

bash# rpm -ivh squid-2.3.STABLE1-5.i386.rpm

And please do note that "bash#" stands for the shell prompt and you do
not need to replicate it in your command.

Having installed Squid successfully, now open the file /etc/squid.conf
using your favourite text editor. Some distributions put this file in
/etc/squid/. This is where it gets interesting, and confusing too, so
read carefully.

Scroll down till you come to the line

#http_port 3128

This option sets your HTTP proxy port to 3128, which is the default
port that Squid runs on. You can uncomment this line and set it to
whatever port you want. It is advisable to avoid port 80 since, if you
are running a web server on the Linux machine, Apache would be
listening on that port.

Scroll down till you come to the line

#cache_mem 8 MB

This option sets a limit on the amount of memory that Squid may use to
store its in-transit and cached objects temporarily in memory. This is
a soft limit: at any given point in time Squid may double or triple the
amount of occupied memory, depending on the size and number of
in-transit cached objects. Uncomment this line and change the size of
your memory cache from 8 MB to whatever size you want it to be. Keep in
mind the amount of RAM that you have on your machine when you allocate
memory to Squid, and remember that the occupancy of the specified
memory limit is dynamic.

Scroll down until you come to the following lines

# LOGFILE PATHNAMES & CACHE DIRECTORIES
#--------------------------------------------------------------------------

The following options relate to setting up and tuning your web cache.
So let's get gunning, pals. Here's the first and most important one.

#cache_dir /var/squid/cache 100 16 256

Isn't this getting a little confusing: one option and four values to
it! C'mon, let's demystify the whole thing. The values given here are
the values Squid will use by default. So if the 'cache_dir' option
isn't explicitly set, Squid defaults to maintaining the cache in
/var/squid/cache. Uncomment the 'cache_dir' option if you want to
customize the parameters.

The first parameter, '/var/squid/cache', is the path to the cache
files. You may change this to whatever you want, but remember: whatever
path you mention here, make sure those directories exist, because Squid
will never create the directories on its own. Also note that the
directories must be writable by the Squid process. If you are a novice
and all of this is sounding too geekish, then I suggest you stick to
the default values.

The next value, '100', is the amount of space in megabytes (MB) that
Squid can utilize to store the cache contents. Modify this to whatever
you think is appropriate to suit your needs.

The next value, referred to as 'Level-1', is the number of
subdirectories that Squid can create under the cache directory. For
starters, I suggest you leave this as it is.

The last value, referred to as 'Level-2', is the number of second-level
directories that Squid can create under each 'Level-1' directory. The
default is fine for the moment.
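Putting the four values together, a customized line might look like this (the /cache/squid path and the 1000MB size are hypothetical; remember that the directory must already exist and be writable by Squid):

```
cache_dir /cache/squid 1000 16 256
```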

Scroll down till you come to the line:

# ACCESS CONTROLS
# -----------------------------------------------------------------------------

The following lines define Access Control Lists for your network. Squid
allows you to define various kinds of ACLs here, so make it a point to
read this whole section on access controls carefully.

In this "ACCESS CONTROLS" section scroll down till you come to the
following lines.

#Default configuration:
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR
# CLIENTS
#
http_access deny all

What you need to do here is set up your own ACLs (Access Control
Lists); otherwise, just comment out the last line as shown above and
put the following line in.

http_access allow all

So now your rule section should look like this.

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR
# CLIENTS

#http_access deny all
http_access allow all
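One caution: http_access allow all lets anyone who can reach your proxy port use it. If the machine is reachable from outside your network, a safer sketch is to allow only your own subnet; the 192.168.1.0/24 network below is hypothetical:

```
acl lan src 192.168.1.0/255.255.255.0
http_access allow lan
http_access deny all
```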

Three cheers, and your proxy has been set up. Now you only need to make
sure that Squid starts every time your Linux box boots.

If you're using Red Hat, you can enable it in the following manner: log
in as root (or "su" to root) and use the "setup" command. Now enter the
System Services submenu and enable Squid.

If you're using SuSE, start YaST and go to "System Administration",
then "Change config File", then scroll down till you come to "START
SQUID" and change it from "NO" to "YES".

The next time you reboot your machine your Proxy will start automatically.

Before you start using Squid, you need to create the swap (cache)
directories. Do this by running:

/usr/sbin/squid -z

This just has to be done the first time.

To start Squid right now, use the following command:

# /etc/rc.d/init.d/squid start

There, you have set up, configured, and started your proxy. Just make sure
your clients' web browsers are configured with the same proxy port that is
set in your /etc/squid.conf file.

Running Microsoft Windows inside Debian: qemu

There are many legitimate reasons for a Debian GNU/Linux user to wish to run Microsoft Windows applications. One approach involves using the wine program to run a single Windows executable in a fake Windows environment. An alternative is to run an entire Windows operating system within a Debian host. Qemu is a processor emulator and virtualization program which allows you to do just that.

Qemu is available in Debian's unstable distribution, and can be installed on Woody from the source code available on its homepage. It is comparable to the commercial software VMware, albeit with a few features missing and lower performance.

On the plus side, it is evolving fast and doesn't require complicated setup or kernel patching. It is also free, although if you wish to run an installation of Windows you will need a valid license to do so.

Qemu is a complete CPU and peripheral emulator which can be used to run entire operating systems as a user process. Supported guest operating systems include Linux distributions such as Debian, Red Hat, and SuSE; varieties of Microsoft Windows such as Windows 98, Windows 2000, and Windows XP; and BSD-based operating systems. Some disk images are available from the Free OS Zoo website.

Whilst it's not as fast as running an operating system directly on the same hardware, because of the overhead of virtualisation, a guest operating system is surprisingly responsive on my AMD XP 2800+ machine once the slow installation is complete.

This brief guide will walk you through installing Windows 2000 as a guest operating system on your Debian box.

First of all you need to install qemu, by running as root:

apt-get install qemu

Once this is done we're ready to start the installation process.

As qemu is a virtualization program, it doesn't touch your real discs; instead you give it a big file and tell the system to use that as its C: drive.

As a simple start we'll set aside a blank 2GB file for Windows to install into. We can create that easily enough:

skx@undecided:~$ dd of=hd.img bs=1024 seek=2000000 count=0
0+0 records in
0+0 records out
0 bytes transferred in 0.000493 seconds (0 bytes/sec)

That's given us a sparse file called hd.img with an apparent size of 2,048,000,000 bytes (2,000,000 blocks of 1024 bytes), close enough to 2GB for us to proceed.
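You can verify that the image really is sparse: its apparent size is about 2GB, yet it occupies almost no disk space until the guest starts writing to it.

```shell
# Recreate the image as above, then compare apparent size with disk usage
dd of=hd.img bs=1024 seek=2000000 count=0 2>/dev/null

wc -c < hd.img    # apparent size: 2048000000 bytes
du -k hd.img      # blocks actually allocated: close to zero
```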

The next thing we need is a Windows 2000 CD-ROM. We have two choices here: either place it in your CD-ROM drive, or use an ISO image.

We'll go with the former.

We want to tell the system that its first hard drive should be the big empty file we have just created, that the CD-ROM drive should be read from the physical drive we have, and that it should boot from CD-ROM.

skx@undecided:~$ qemu -boot d -cdrom /dev/cdrom  -hda hd.img
QEMU 0.6.0 monitor - type 'help' for more information

The '-boot d' flag tells the system to boot from the CD-ROM drive we've specified, and '-hda hd.img' tells it that the first hard drive should be the contents of the file hd.img which we created previously.

This should bring up a window on your desktop within which you'll see Windows boot. You can click in the window to give it focus; when you wish to return the mouse to your desktop, press Ctrl + Alt. Pressing Ctrl + Alt + f will toggle between fullscreen and windowed mode.

Now you can sit back and install Windows as you normally would. Some parts will be very slow; other parts, such as formatting the drive, will be lightning fast!

Whilst using the guest operating system is acceptably responsive for me, the actual installation took a couple of hours. Most of this is spent waiting for it to finish, but it's something to be aware of.

I found that when I installed Windows 2000 it seemed to go faster if I ran it fullscreen and shut down as many open programs as I could.

When it came to networking, I found that Debian doesn't allow non-root users to write to the tun device by default. As root, run:

chgrp users /dev/net/tun
chmod g+w /dev/net/tun

(If you don't have that device file, you will need to run the following commands first; this assumes you're running a 2.6.x kernel.)

mkdir -p /dev/net
mknod /dev/net/tun c 10 200

Finally we add in the module to enable the device :

modprobe tun
echo 'tun' >> /etc/modules

Now that you've installed the operating system, you can create a backup of the image by simply copying the 'hd.img' file which is being used as the disk drive:

skx@undecided:~$ cp hd.img pristine.img

Any time you wish to restore, simply overwrite hd.img with the pristine one; you'll never have to reinstall again!

Now that we've done the installation we can start the system for real with:

skx@undecided:~$ qemu -hda hd.img -boot c

From bootup to login prompt takes me 39 seconds, which is pretty impressive.

Networking should be setup properly for you in the sense that on the host machine you will have the interface tun0 setup.

Once that's done you need to set up some way for the emulated machine to talk to the world, or at least to its host.

I chose to give the host machine an IP address on its own network. We do this by first setting up an address on the host, then on the guest.

I use 10.0.0.1 for the host, and 10.0.0.2 for the Windows system.

On the host run:

root@undecided:~# ifconfig tun0 10.0.0.1 up

Then, inside the Windows guest, adjust the networking so that the operating system has the IP address 10.0.0.2, with the gateway set to 10.0.0.1.

This should allow you to ping both the guest from the host, and vice versa.

If you wish the guest to be able to talk to the internet generally, run the following on the host:

root@undecided:~# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
root@undecided:~# echo "1" >/proc/sys/net/ipv4/ip_forward

Wednesday, April 29, 2009

A comprehensive command guide to Debian’s APT-GET and DPKG

Purpose: Debian has a very powerful package management system called APT. Learning some useful commands can really unleash the true power and usefulness of this package management system. From time to time I will add commands and other tips and tricks that will be helpful to solve some issues and get work done faster. The idea is to make this post a COMPREHENSIVE command guide for APT package management.

Note: For most of the examples, I have used “traceroute” as an example package wherever possible. In some scenarios I have used other packages for the example since traceroute was not suitable for those.

APT-GET Commands

  • To install a package. For example, let's say you would like to install the traceroute package:

#apt-get install traceroute

  • To download a package's source files. For example, let's say you would like to download the “traceroute” package's source:

# apt-get source traceroute

  • To install the dependencies needed for building a package from its source. For example, before you start building a binary package (traceroute) from its source, you need to install the dependencies required to build it:

# apt-get build-dep traceroute

  • To build a package from its source:

# apt-get source traceroute
# cd traceroute-VERSION
# debuild -uc -us
# cd ..

  • To fix a system with incorrect/broken dependencies. Also useful if the apt-get was stopped unexpectedly due to crash or power failure:

# apt-get -f install

DPKG Commands

  • To configure any package that is unpacked but not yet configured, or is in a half-configured state. This can be used along with “apt-get -f install”. It is also useful in case of an unexpected shutdown while upgrading the system.

# apt-get -f install
# dpkg --configure -a

  • To remove a package (this does not remove the package's configuration files):

# dpkg --remove traceroute

  • To remove a package (and its configuration files):

# dpkg --purge traceroute

  • To reconfigure a package. For example, suppose you want to select different settings for your X server:

# dpkg-reconfigure xserver-xorg

  • To identify the package that produced a particular file. For example, “I would like to know which Debian package produced the file ‘lft.db’”:

# dpkg -S lft.db
or
# dpkg --search lft.db

Output:

traceroute: /usr/bin/lft.db
traceroute: /usr/share/man/man8/lft.db.8.gz

  • To list all the files installed by a particular package:

# dpkg --listfiles traceroute

  • To list all the packages installed on the system along with their state, name, version and a description:

# dpkg --list

  • To list all the packages installed on the system (only names):

# dpkg --get-selections
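One handy use of --get-selections is replicating a package set onto another machine. This is a standard Debian workflow, sketched here with a hypothetical selections.txt file:

```shell
# On the source machine: save the list of installed packages
dpkg --get-selections > selections.txt

# On the target machine: mark the same packages, then install them
dpkg --set-selections < selections.txt
apt-get dselect-upgrade
```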

  • To troubleshoot error messages like the following:

Unpacking dictionaries-common (from …/dictionaries-common_0.98.12_all.deb) …
dpkg-divert: cannot open diversions: No such file or directory
dpkg: error processing /var/cache/apt/archives/dictionaries-common_0.98.12_all.deb (–unpack):
subprocess pre-installation script returned error exit status 2
Selecting previously deselected package aspell.
Unpacking aspell (from …/aspell_0.60.6-1_i386.deb) …
Processing triggers for man-db …
Errors were encountered while processing:
/var/cache/apt/archives/dictionaries-common_0.98.12_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
The fix is to create the missing diversions file:

# touch /var/lib/dpkg/diversions

  • To troubleshoot error messages like the following:

Errors were encountered while processing:
/var/cache/apt/archives/acpid_ 1.0.8-7.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Try the following commands one by one:

# apt-get -f install
# apt-get upgrade
# apt-get dist-upgrade

# dpkg --configure -a
# apt-get -f install

# cd /var/lib/dpkg/info
# rm -rf acpid*
# apt-get install acpid

# cd /var/lib/apt/lists
# rm *
# apt-get update
# apt-get install acpid

# cd /var/cache/apt/archives
# rm acpid_ 1.0.8-7.deb
# apt-get install acpid

APT-CACHE Commands

  • To perform a full-text search on a package’s name, description, etc:

# apt-cache search traceroute

  • To print detailed information of a package:

# apt-cache show traceroute

  • To print a list of packages a given package (traceroute) depends on. For example, show me all the packages on which traceroute depends:

# apt-cache depends traceroute

Output:

traceroute
Depends: libc6
Conflicts: tcptraceroute
Conflicts:
Conflicts: traceroute-nanog

  • To print a list of packages that are dependent on a particular package. For example, show me all the packages that depend on the “traceroute” package:

# apt-cache rdepends traceroute

Output:

Reverse Depends:
xorp
traceroute-nanog
traceroute-nanog
licq
traceroute-nanog
ksniffer
traceroute-nanog
iputils-tracepath
traceroute-nanog
gnome-nettool
traceroute-nanog
education-common
traceroute-nanog

  • To print detailed information about the versions available for a package and the packages that reverse-depend on it. For example, show me all the packages which depend on traceroute:

# apt-cache showpkg traceroute

Happy Debians! :)

CLI Magic: For geek cred, try these one-liners

In this context, a one-liner is a set of commands normally joined through a pipe (|). When joined by a pipe, the command on the left passes its output to the command on the right. Simple or complex, you can get useful results from a single line at the bash command prompt.

For example, suppose you want to know how many files are in the current directory. You can run:

rakesh@debian ~]$ ls | wc -l

That's a very simple example -- you can get more elaborate. Suppose you want to know about the five processes that are consuming the most CPU time on your system:

rakesh@debian ~]$ ps -eo user,pcpu,pid,cmd | sort -r -k2 | head -6

The ps command's -o option lets you specify the columns you want shown. sort -r does a reverse-order sort using the second column (pcpu) as the key (-k2). head takes only the first six lines from the ordered list, which includes the header line. You could place pcpu as the first column and then omit the -k2 option, because sort by default uses the first column. That illustrates how you may have to try several approaches on some one-liners; different versions and ways of manipulating the options may produce different results.

A common situation for Linux administrators on servers with several users is to get quick ordered user lists. One simple way to get that is with the command:

rakesh@debian ~]$ cat /etc/passwd | sort

If you just need the username, the above command returns too much information. You can fix it with something like this:

rakesh@debian ~]$ cat /etc/passwd | sort | cut -d":" -f1

The sorted list is passed to cut, where the -d option indicates the field delimiter character. cut breaks each line into pieces, and the first field (-f1) is the one you need to display. That's better; it shows only usernames now. But you may not want to see all the system usernames, like apache, bin, and lp. If you just want human users, try this:

rakesh@debian ~]$ cat /etc/passwd | sort | gawk '$3 >= 500 {print $1 }' FS=":"

gawk evaluates each line from the output piped to it. If the third field -- the UID -- is greater than or equal to 500 (most modern distros start numbering normal users from this number), then the action is performed. The action, indicated between braces, is to print the first field, which is the username. The field separator for gawk is the colon, as specified by the FS variable.

Now suppose you have a directory with lots of files with different extensions, and you want to back up only the .php files, calling them filename.bkp. The next one-liner should do the job:

rakesh@debian ~]$ for f in *.php; do cp $f $f.bkp; done

This command loops through all the files in the current directory, looking for those with .php extensions. Each file's name is held in the $f variable, and a simple copy command then does the backup. Notice that in this example we use a semicolon to execute the commands one after another, rather than piping output between them.
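A slightly hardened version of the same loop quotes "$f" so file names containing spaces survive. Here it is run against a throwaway directory with sample files:

```shell
# Build a scratch directory with a few sample files
dir=$(mktemp -d)
cd "$dir"
touch index.php admin.php notes.txt

# Back up only the .php files; quoting "$f" protects names with spaces
for f in *.php; do cp -- "$f" "$f.bkp"; done

ls    # admin.php  admin.php.bkp  index.php  index.php.bkp  notes.txt
```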

What about bulk copy? Consider this:

rakesh@debian ~]$ tar cf - . | (cd /usr/backups/; tar xfp -)

It creates a tar archive of the current directory recursively, then pipes it to the next command. The parentheses create a temporary subshell, change to a different directory, and extract the contents of the archive, which is the whole original directory. The p option on the second tar command preserves file properties like times and permissions. After completion, the shell is back at the original directory.

A variant on the previous one-liner lets you do the same kind of backup on a remote server:

rakesh@debian ~]$ tar cf - . | ssh smith@remote.server tar xfp - -C /usr/backup/smith

Here, the command establishes an SSH session on the remote server and untars the archive with the -C option, which changes to the directory where the extraction will be made, in this case /usr/backup/smith.

grep and gawk and uniq, oh my!

Text processing is a common use for one-liners. You can accomplish marvelous things with the right set of commands. In the next example, suppose you want a report on incoming email messages that look like this:


rakesh@debian ~]$ cat incoming_emails
2008-07-01 08:23:17 user1@example.com
2008-07-01 08:25:20 user2@someplace.com
2008-07-01 08:32:41 somebody@server.net
2008-07-01 08:35:03 spam not received, filtered
2008-07-01 08:39:57 user1@example.com
...

You are asked for a report with an ordered list of who received incoming messages. Many recipients would be repeated in the output of the cat command. This one-liner resolves the problem:

rakesh@debian ~]$ grep '@' incoming_emails | gawk '{print $3}' | sort | uniq

grep filters the lines that contain a @ character, which indicates an email address. Next, gawk extracts the third field, which contains the email address, and passes it to the sort command. Sorting is needed to group the same recipients together, because the last command, uniq, omits repeated lines from the sorted list. The output is shown below. Most text-processing one-liners use a combination of grep, sed, awk, sort, tr, cut, uniq, and other related commands.


somebody@server.net
user1@example.com
user2@someplace.com
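If the report should also say how many messages each recipient got, extending the pipeline with uniq -c does the counting (plain awk behaves the same as gawk here). The here-document just recreates a sample log like the one above:

```shell
# Recreate the sample log
cat > incoming_emails <<'EOF'
2008-07-01 08:23:17 user1@example.com
2008-07-01 08:25:20 user2@someplace.com
2008-07-01 08:32:41 somebody@server.net
2008-07-01 08:35:03 spam not received, filtered
2008-07-01 08:39:57 user1@example.com
EOF

# Count messages per recipient, busiest recipient first
grep '@' incoming_emails | awk '{print $3}' | sort | uniq -c | sort -rn
```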

If you like any of these one-liners but think they're too long to type often, you can create an alias for the command and put it in your .bashrc file. Everything in this file is run when you log in, so your personal aliases will be ready at any time.

rakesh@debian ~]$ alias p5="ps -eo pcpu,user,pid,cmd | sort -r | head -6"

You can certainly create better and simpler variations of all of the commands in this article, but they're a good place to start. If you are a Linux system administrator, it's good practice to collect, create, and modify your own one-liners and keep them handy; you never know when you are going to need them. If you have a good one-liner, feel free to share it with other readers in a comment below.

Tuesday, April 28, 2009

Seamlessly integrate XP into Linux with SeamlessRDP

Today users have many choices for combining Linux and Windows on the same machine. You can go with a traditional dual-boot system in which the operating systems reside on different disk partitions but share a common partition for files, or you can use an emulator such as Wine, which lets you install Windows applications right in your Linux system. Virtualization programs, such as those from VMware, bring you closer to the more ideal solution of using both systems at once, but one is always the host and one is always the guest, shown inside a window. But by combining VMware Server with some free software, you can run Windows XP along with Linux, not inside a console window, but completely integrated into the Linux environment.

To make this work, you need three tools installed on your system. Though not open source, VMware Server is free as in beer; it requires a license number that you get from the same page where you download the program. (Of course you also need a copy of Windows XP to run under VMware Server.) rdesktop is a Remote Desktop Protocol client bundled with virtually every Linux distro, and Cendio's SeamlessRDP is a GPL-licensed utility that lets you integrate rdesktop with Windows XP.

With this solution, you're connecting to a virtual machine in the background, but you don't see a window frame or the Windows desktop. All you see is the Windows XP menu bar along with your regular KDE or GNOME menu bar, creating the illusion that both operating systems are working at the same time side by side. In Figure 1. below, notice the KDE menu bar on the top and the Windows XP menu bar on the bottom of the screen. You can launch applications from both.

To start, install Windows XP in VMware with the usual options, and make sure to set the network connection option to Network Address Translation (NAT). This simplifies the connection from the host machine. After you complete the Windows installation, log in and set a password for an account you've created that you'll call from Linux. You must allow remote connections to this Windows virtual machine, which you can do by going to Start -> Control Panel -> System (you may have to switch to the classic view). Once the system icon opens, go to the Remote tab and check "Allow users to connect remotely to this computer."

Now install VMware Tools for your Windows XP virtual machine. You must know which IP address the VMware DHCP server assigned to the virtual machine; to find it, open up a DOS console and type ipconfig.

Next, install SeamlessRDP from within your Windows virtual machine. Open Internet Explorer and download the SeamlessRDP zip file. Create a directory under C: (C:\seamless) and unzip the archive's three files there; you'll use seamlessrdpshell.exe later.

Now you can log off the Windows session, but don't turn off the virtual machine. Once Windows is displaying the Welcome screen, you can close the VMware Server console, leaving the Windows XP virtual machine session alive in the background. A simple ps -ef | grep vmware proves it's still there.

Now it's time to use rdesktop. First, try to open a simple application, such as Notepad. Start a terminal session under Linux, and run this command from your xterm:

rdesktop -A -s "c:\seamless\seamlessrdpshell.exe notepad" 192.168.217.129 -u admin -p secret

Of course, change the IP address, username, and password to match your settings. If everything is OK, you should see the Notepad application pop up on your Linux system.

The -A option enables the SeamlessRDP mode that creates an X11 window for each application you launch. This option requires you to set a shell (-s) that launches the application indicated in the rdesktop command. Notice that you're using the directory you created and the SeamlessRDP application, c:\seamless\seamlessrdpshell.exe. The argument to this command is the Windows program that you wish to run. You need the full path if the program isn't in the regular path variable.

The -u and -p switches are optional. If you don't use them, the application will launch a Windows login screen asking for credentials.

Note that when you close Notepad or any other Windows-launched application, the rdesktop connection is still open. You must log out, because until you do, rdesktop won't be able to start other applications. Since you don't have a desktop and a Start menu from which to log off, you must go to the Windows XP virtual machine and press Ctrl-Alt-Del, then log off, or restart the virtual machine.

Once you know how to launch a Windows application from rdesktop with the SeamlessRDP option, try it with explorer.exe itself. This application creates a full desktop environment so users can interact mainly through the menu bar. If you run it "as is," it will pop up the full Windows XP desktop (including the wallpaper, icons, and shortcuts on the desktop).

If that's too intrusive for you, you can hack the Windows registry to get rid of the desktop and keep only the menu bar. Once you're in Windows XP again, launch the Registry editor by going to Start -> Run and typing regedit. Search for HKEY_CURRENT_USER -> Software -> Microsoft -> Windows -> CurrentVersion -> Policies -> Explorer. Once there, right-click on the right panel and select New -> DWORD Value. Name it NoDesktop, then click on it and change the data value to 1. Close the Registry editor and restart Windows.

When you turn off your Linux system, any virtual machine that is running in the background will obviously be lost, so you must start the VMware virtual machine and close the server console every time you want to connect to Windows this way. Before running the rdesktop command, consider moving your menu bar from the bottom of the screen to the top, because the Windows bar will sit there.

Now run the rdesktop command like this:

rdesktop -A -s 'c:\seamless\seamlessrdpshell.exe c:\windows\explorer.exe' 192.168.217.129 -u admin -p secret

Voilà! After a few seconds, you should have the Windows XP menu bar at the bottom of the screen, and you should be able to launch any application you have installed. You've created the illusion that both operating systems are working on the same machine at the same time. Very cool.
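To avoid retyping the long rdesktop line for every application, you could wrap it in a small shell function. This is just a convenience sketch; the host, username, and password are placeholders for your own settings:

```shell
# Placeholders: substitute your VM's address and credentials
RDP_HOST=192.168.217.129
RDP_USER=admin
RDP_PASS=secret

# Usage: winapp notepad
#        winapp 'c:\windows\explorer.exe'
winapp() {
    rdesktop -A -s "c:\\seamless\\seamlessrdpshell.exe $1" \
        "$RDP_HOST" -u "$RDP_USER" -p "$RDP_PASS"
}
```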

This trick doesn't work with just VMware virtual machines. It also works with Windows clients on your network and other virtual machine software. Simply install SeamlessRDP and configure Windows XP properly so that rdesktop can connect to it.

TIP: Making IceWeasel identify itself as Firefox in Debian Linux

For those of you who have been using Debian Linux, you probably already know that the famous “Firefox” browser is called “IceWeasel” in Debian because of some licensing issues. There is nothing wrong with “IceWeasel”, as only the name is changed. However, there are some websites that offer you different functionality based on the type of browser you are using.

For example, if you would like to download the Alexa Toolbar for IceWeasel in Debian Linux, you probably won't be able to, because the Alexa website does not recognize “IceWeasel”, although in reality it is “Firefox” with just a different brand name. It will tell you that your browser is not supported or recognized. What do you do in cases like these? Should you just give up? No. I will show you a trick by which you can make your “IceWeasel” appear as “Firefox” to websites like Alexa.

Step 1: Open your IceWeasel Web-browser

Now type:

about:config

in your URL/address bar and click the “I'll be careful, I promise!” button.

Step 2: Search for Iceweasel in the filter bar

You will see a “Filter” bar just below the URL/address bar. Now type iceweasel (without quotes) in it, and you will see some results in the box below, such as:

general.useragent.extra.firefox

What you see above is very similar to the Windows Registry key concept.

Step 3: Modify the string value

Now select the entry named “general.useragent.extra.firefox”, right-click on it, and select “Modify”. An input box will appear with the value set to:

"Iceweasel/3.0.6"

Change this to something like:

"Firefox/3.0.6"

Click “OK”.

Now close your browser and open it again. Try going to the Alexa website; now it will welcome your browser (which it thinks is Firefox) and will allow you to download the toolbar.

Happy Firefoxing with Iceweasel!


Add Digg To Blogger.com Posts

After poking around the XML template for my site and playing around with the Digg code, I've figured out how to add a Digg button to my posts.
Listed below is a quick how-to for adding this functionality to your website too.


  1. Make sure your blog is set to enable Post Pages and that you are using the new Blogger, not the classic version.

  2. Go to the customization section of your blog; on the Template tab, select Edit HTML.

  3. Make sure to click Download Full Template and save a backup copy to disk - just in case.

  4. Put a check in Expand Widget Templates.

  5. Search for


    Update: March 5th, 2009

    It has come to my attention that some templates use a slightly different formatting. If you cannot find the text listed above - try searching for and follow the rest of the instructions.

  6. Paste this on the line directly before it:

