Saturday, April 19, 2008

Installing rkhunter

What is a rootkit?

A rootkit is a program that runs on *nix-based OSes and allows a remote user to execute code or commands. There are many different types of rootkits. Some install themselves among legitimate daemons and "hide" themselves, often reporting results, output, or data to a remote server. Most rootkits I've seen aren't destructive; they are malicious in nature because they use your server as a zombie or bot. If you encounter a really bad rootkit, it could allow a hacker remote access (ssh or telnet) with full root privileges.

What does rkhunter do?


Rkhunter is much like a virus scanner for a Windows system: it has definitions that help it identify rootkits, and it reports what it finds. Like anything, rkhunter isn't 100% accurate, but it weeds out the majority of rootkits. When rkhunter runs, it examines various system files, conf files, and bin directories, cross-references the results against the signatures of infected systems (from the definitions), and compiles a report. This is where *nix systems really shine: while the OS itself, and how it's compiled or configured, may vary, the file system layout and configuration are basically the same. This allows programs like rkhunter to provide results with a fairly small window for error or false positives.

Installing rkhunter

Just like many other packages for *nix, you'll have to download the tar file from the project's website. Sometimes I mirror packages on this site, but because this one changes often I'm not going to do that. You can find the latest version on the rkhunter website (rootkit.nl). Obviously you need root privileges to install this. Here we go:

debian:~# wget http://superb-west.dl.sourceforge.net/sourceforge/rkhunter/rkhunter-1.3.0.tar.gz
-- 15:27:36--  http://superb-west.dl.sourceforge.net/sourceforge/rkhunter/rkhunter-1.3.0.tar.gz
          => `rkhunter-1.3.0.tar.gz'
Resolving superb-west.dl.sourceforge.net... 209.160.69.253
Connecting to superb-west.dl.sourceforge.net|209.160.69.253|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 126,314 (123K) [application/x-tar]

100%[================================================================================>] 126,314 259.19K/s

15:27:36 (258.38 KB/s) - `rkhunter-1.3.0.tar.gz' saved [126314/126314]

Version 1.3.0 is the latest stable version, but do check the rkhunter home page to see if a newer version is available.

Comparing downloaded file with md5sum signature

Being good sysadmins we want to check the md5sum of the downloaded file before extracting it and installing it.

To find the md5 signature of the downloaded package:


debian:~# md5sum rkhunter-1.3.0.tar.gz

Compare this with the signature available on the Debian package list - ensure you look at the original download and not the diff patch that Debian applies.
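As a sketch of that workflow, md5sum -c can do the comparison for you. The file and hash file below are stand-ins, not the real rkhunter values; substitute the actual tarball and the published signature:

```shell
# Illustrative only: create a stand-in file and a stand-in hash file.
# In practice, the .md5 file holds the hash published by the project.
cd /tmp
printf 'pretend tarball contents\n' > rkhunter-1.3.0.tar.gz
md5sum rkhunter-1.3.0.tar.gz > rkhunter-1.3.0.tar.gz.md5
md5sum -c rkhunter-1.3.0.tar.gz.md5 && echo "Checksum OK - safe to extract"
```

If the hashes differ, md5sum -c prints FAILED and exits non-zero; re-download the file before installing.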

Extracting

Once happy, extract the source code:

debian:~# tar -xzvf rkhunter*.tar.gz
./rkhunter/files/
./rkhunter/files/CHANGELOG
./rkhunter/files/LICENSE
./rkhunter/files/README
./rkhunter/files/WISHLIST
./rkhunter/files/backdoorports.dat
./rkhunter/files/check_modules.pl
./rkhunter/files/check_port.pl
./rkhunter/files/defaulthashes.dat
./rkhunter/files/filehashmd5.pl
./rkhunter/files/filehashsha1.pl
./rkhunter/files/mirrors.dat
./rkhunter/files/os.dat
./rkhunter/files/rkhunter
./rkhunter/files/rkhunter.conf
./rkhunter/files/rkhunter.spec
./rkhunter/files/showfiles.pl
./rkhunter/files/md5blacklist.dat
./rkhunter/files/tools/
./rkhunter/files/tools/update_server.sh
./rkhunter/files/tools/update_client.sh
./rkhunter/files/tools/README
./rkhunter/files/check_update.sh
./rkhunter/files/programs_bad.dat
./rkhunter/files/contrib/
./rkhunter/files/contrib/run_rkhunter.sh
./rkhunter/files/contrib/README.txt
./rkhunter/files/testing/
./rkhunter/files/testing/stringscanner.sh
./rkhunter/files/testing/rootkitinfo.txt
./rkhunter/files/testing/rkhunter.conf
./rkhunter/files/development/
./rkhunter/files/development/createfilehashes.pl
./rkhunter/files/development/createhashes.sh
./rkhunter/files/development/rpmhashes.sh
./rkhunter/files/development/rpmprelinkhashes.sh
./rkhunter/files/development/osinformation.sh
./rkhunter/files/development/rkhunter.8
./rkhunter/files/development/createhashesall.sh
./rkhunter/files/development/search_dead_sysmlinks.sh
./rkhunter/files/programs_good.dat
./rkhunter/installer.sh
debian:~# cd rkhunter-1.3.0
debian:~/rkhunter-1.3.0# ls
files installer.sh
debian:~/rkhunter-1.3.0# ./installer.sh
Rootkit Hunter installer 1.2.4 (Copyright 2003-2005, Michael Boelen)
---------------
Starting installation/update

Checking /usr/local... OK
Checking file retrieval tools... /usr/bin/wget
Checking installation directories...
- Checking /usr/local/rkhunter...Created
- Checking /usr/local/rkhunter/etc...Created
- Checking /usr/local/rkhunter/bin...Created
- Checking /usr/local/rkhunter/lib/rkhunter/db...Created
- Checking /usr/local/rkhunter/lib/rkhunter/docs...Created
- Checking /usr/local/rkhunter/lib/rkhunter/scripts...Created
- Checking /usr/local/rkhunter/lib/rkhunter/tmp...Created
- Checking /usr/local/etc...Exists
- Checking /usr/local/bin...Exists
Checking system settings...
- Perl... OK
Installing files...
Installing Perl module checker... OK
Installing Database updater... OK
Installing Portscanner... OK
Installing MD5 Digest generator... OK
Installing SHA1 Digest generator... OK
Installing Directory viewer... OK
Installing Database Backdoor ports... OK
Installing Database Update mirrors... OK
Installing Database Operating Systems... OK
Installing Database Program versions... OK
Installing Database Program versions... OK
Installing Database Default file hashes... OK
Installing Database MD5 blacklisted files... OK
Installing Changelog... OK
Installing Readme and FAQ... OK
Installing Wishlist and TODO... OK
Installing RK Hunter configuration file... OK
Installing RK Hunter binary... OK
Configuration updated with installation path (/usr/local/rkhunter)

Installation ready.
See /usr/local/rkhunter/lib/rkhunter/docs for more information. Run 'rkhunter' (/usr/local/bin/rkhunter)

Running rkhunter

Well that's it! As you can see I downloaded the package using wget, unpacked it, and installed it using a shell script. Now that it's installed let's run it! The results below are from a non-production RHEL3 box.
debian:~/rkhunter-1.3.0# rkhunter -c
Rootkit Hunter 1.3.0 is running

Determining OS... Unknown
Warning: This operating system is not fully supported!
All MD5 checks will be skipped!


Checking binaries
* Selftests
Strings (command) [ OK ]


* System tools
Skipped!


Check rootkits
* Default files and directories
Rootkit '55808 Trojan - Variant A'... [ OK ]
ADM Worm... [ OK ]
Rootkit 'AjaKit'... [ OK ]
Rootkit 'aPa Kit'... [ OK ]
Rootkit 'Apache Worm'... [ OK ]
Rootkit 'Ambient (ark) Rootkit'... [ OK ]
Rootkit 'Balaur Rootkit'... [ OK ]
Rootkit 'BeastKit'... [ OK ]
Rootkit 'beX2'... [ OK ]
Rootkit 'BOBKit'... [ OK ]
Rootkit 'CiNIK Worm (Slapper.B variant)'... [ OK ]
Rootkit 'Danny-Boy's Abuse Kit'... [ OK ]
Rootkit 'Devil RootKit'... [ OK ]
Rootkit 'Dica'... [ OK ]
Rootkit 'Dreams Rootkit'... [ OK ]
Rootkit 'Duarawkz'... [ OK ]
Rootkit 'Flea Linux Rootkit'... [ OK ]
Rootkit 'FreeBSD Rootkit'... [ OK ]
Rootkit 'Fuck`it Rootkit'... [ OK ]
Rootkit 'GasKit'... [ OK ]
Rootkit 'Heroin LKM'... [ OK ]
Rootkit 'HjC Kit'... [ OK ]
Rootkit 'ignoKit'... [ OK ]
Rootkit 'ImperalsS-FBRK'... [ OK ]
Rootkit 'Irix Rootkit'... [ OK ]
Rootkit 'Kitko'... [ OK ]
Rootkit 'Knark'... [ OK ]
Rootkit 'Li0n Worm'... [ OK ]
Rootkit 'Lockit / LJK2'... [ OK ]
Rootkit 'MRK'... [ OK ]
Rootkit 'Ni0 Rootkit'... [ OK ]
Rootkit 'RootKit for SunOS / NSDAP'... [ OK ]
Rootkit 'Optic Kit (Tux)'... [ OK ]
Rootkit 'Oz Rootkit'... [ OK ]
Rootkit 'Portacelo'... [ OK ]
Rootkit 'R3dstorm Toolkit'... [ OK ]
Rootkit 'RH-Sharpe's rootkit'... [ OK ]
Rootkit 'RSHA's rootkit'... [ OK ]
Sebek LKM [ OK ]
Rootkit 'Scalper Worm'... [ OK ]
Rootkit 'Shutdown'... [ OK ]
Rootkit 'SHV4'... [ OK ]
Rootkit 'SHV5'... [ OK ]
Rootkit 'Sin Rootkit'... [ OK ]
Rootkit 'Slapper'... [ OK ]
Rootkit 'Sneakin Rootkit'... [ OK ]
Rootkit 'Suckit Rootkit'... [ OK ]
Rootkit 'SunOS Rootkit'... [ OK ]
Rootkit 'Superkit'... [ OK ]
Rootkit 'TBD (Telnet BackDoor)'... [ OK ]
Rootkit 'TeLeKiT'... [ OK ]
Rootkit 'T0rn Rootkit'... [ OK ]
Rootkit 'Trojanit Kit'... [ OK ]
Rootkit 'Tuxtendo'... [ OK ]
Rootkit 'URK'... [ OK ]
Rootkit 'VcKit'... [ OK ]
Rootkit 'Volc Rootkit'... [ OK ]
Rootkit 'X-Org SunOS Rootkit'... [ OK ]
Rootkit 'zaRwT.KiT Rootkit'... [ OK ]

* Suspicious files and malware
Scanning for known rootkit strings [ OK ]
Scanning for known rootkit files [ OK ]
Testing running processes... [ OK ]
Miscellaneous Login backdoors [ OK ]
Miscellaneous directories [ OK ]
Software related files [ OK ]
Sniffer logs [ OK ]

* Trojan specific characteristics
shv4
Checking /etc/rc.d/rc.sysinit
Test 1 [ Clean ]
Test 2 [ Clean ]
Test 3 [ Clean ]
Checking /etc/inetd.conf [ Not found ]
Checking /etc/xinetd.conf [ Clean ]

* Suspicious file properties
chmod properties
Checking /bin/ps [ Clean ]
Checking /bin/ls [ Clean ]
Checking /usr/bin/w [ Clean ]
Checking /usr/bin/who [ Clean ]
Checking /bin/netstat [ Clean ]
Checking /bin/login [ Clean ]
Script replacements
Checking /bin/ps [ Clean ]
Checking /bin/ls [ Clean ]
Checking /usr/bin/w [ Clean ]
Checking /usr/bin/who [ Clean ]
Checking /bin/netstat [ Clean ]
Checking /bin/login [ Clean ]

* OS dependant tests

Linux
Checking loaded kernel modules... [ OK ]
Checking files attributes [ OK ]
Checking LKM module path [ OK ]


Networking
* Check: frequently used backdoors
Port 2001: Scalper Rootkit [ OK ]
Port 2006: CB Rootkit [ OK ]
Port 2128: MRK [ OK ]
Port 14856: Optic Kit (Tux) [ OK ]
Port 47107: T0rn Rootkit [ OK ]
Port 60922: zaRwT.KiT [ OK ]

* Interfaces
Scanning for promiscuous interfaces [ OK ]
System checks
* Allround tests
Checking hostname... Found. Hostname is roswell
Checking for passwordless user accounts... OK
Checking for differences in user accounts... OK. No changes.
Checking for differences in user groups... OK. No changes.
Checking boot.local/rc.local file...
- /etc/rc.local [ OK ]
- /etc/rc.d/rc.local [ OK ]
- /usr/local/etc/rc.local [ Not found ]
- /usr/local/etc/rc.d/rc.local [ Not found ]
- /etc/conf.d/local.start [ Not found ]
- /etc/init.d/boot.local [ Not found ]
Checking rc.d files...
Processing........................................
........................................
........................................
........................................
........................................
........................................
........................................
........................................
........................................
........................................
........................................
...................................
Result rc.d files check [ OK ]
Checking history files
Bourne Shell [ OK ]

* Filesystem checks
Checking /dev for suspicious files... [ OK ]
Scanning for hidden files... [ OK ]

Application advisories
* Application scan
Checking Apache2 modules ... [ Not found ]
Checking Apache configuration ... [ OK ]

* Application version scan
- GnuPG 1.2.1 [ Old or patched version ]
- Apache 2.0.46 [ Old or patched version ]
- Bind DNS 9.2.4 [ OK ]
- OpenSSL 0.9.7a [ Old or patched version ]
- PHP 4.3.2 [ Old or patched version ]
- Procmail MTA 3.22 [ OK ]
- OpenSSH 3.6.1p2 [ Old or patched version ]



Security advisories
* Check: Groups and Accounts
Searching for /etc/passwd... [ Found ]
Checking users with UID '0' (root)... [ OK ]

* Check: SSH
Searching for sshd_config...
Found /etc/ssh/sshd_config
Checking for allowed root login... Watch out Root login possible. Possible risk!
info:
Hint: See logfile for more information about this issue
Checking for allowed protocols... [ OK (Only SSH2 allowed) ]

* Check: Events and Logging
Search for syslog configuration... [ OK ]
Checking for running syslog slave... [ OK ]
Checking for logging to remote system... [ OK (no remote logging) ]


If you want to skip the interactive keypress prompts, add the -sk option at the end:

debian:~/rkhunter-1.3.0# /usr/local/bin/rkhunter -c -sk

Results and Conclusion

Upon running the program, the results are compiled and displayed. They will vary somewhat across OSes, configurations, and kernel builds, but the detection of rootkits and backdoors works the same way. As I mentioned above, this is a MUST if you administer any *nix boxes that touch the internet. Rootkits are often the worst type of compromise possible: most are designed to infect your OS and do their work with minimal detection. Don't make the mistake of waiting to harden and audit your OS! You won't enjoy the aftermath, all because you didn't take the few hours to set up precautionary measures before green-lighting your production machines.

Configuration

You may have configured your Slice in a way that triggers warnings from rkhunter.

Firstly, I would say listen to what it says and decide whether you really need something that is a security risk; secondly, if you are willing to accept the risk, there are ways of configuring rkhunter so it ignores certain things.

Hey?

Here's an example. Let's say I ran rkhunter and got this message:

Checking for allowed root login... Watch out Root login possible. Possible risk!
info: "PermitRootLogin yes" found in file /etc/ssh/sshd_config
Hint: See logfile for more information about this issue

That's fairly straight forward: I left the "PermitRootLogin" set to "yes" in my sshd_config file.

Now we know that's a silly thing to do and it's a nice reminder to tighten up our SSH configuration.

But let's say we do want to enable root logins via SSH but don't want a warning every time we run rkhunter.

Enter /usr/local/etc/rkhunter.conf. Open it up:

debian:~# sudo nano /usr/local/etc/rkhunter.conf

Scan down until you reach this line:

#ALLOW_SSH_ROOT_USER=0

Uncomment the line and change the 0 to a 1

ALLOW_SSH_ROOT_USER=1
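The same edit can be made non-interactively with sed; a sketch, assuming the default /usr/local install path from above:

```shell
# Uncomment the option and flip 0 to 1 in place (keeping a backup copy).
conf=/usr/local/etc/rkhunter.conf
cp "$conf" "$conf.bak"
sed -i 's/^#ALLOW_SSH_ROOT_USER=0/ALLOW_SSH_ROOT_USER=1/' "$conf"
grep '^ALLOW_SSH_ROOT_USER' "$conf"
```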

Now when we run rkhunter there are no highlighted warnings and this message:

Checking for allowed root login...  [ OK (Remote root login permitted by explicit option) ]

Now it says root logins are OK, but specifies why: you explicitly allowed it.

However, please don't allow root logins. Thanks.

Automation

Lastly, we know that automation and email notification make an administrator's life a lot easier, so now we can add rkhunter to a cronjob.

This is straight from the rkhunter website: You need to create a short shell script as follows:

#!/bin/sh

( /usr/local/bin/rkhunter --versioncheck
/usr/local/bin/rkhunter --update
/usr/local/bin/rkhunter --cronjob --report-warnings-only
) | /usr/bin/mail -s "rkhunter output" admin@yourdomain.com

Save the file and call it something like 'rkhunterscript'. Make the file executable:

debian:~# chmod 750 rkhunterscript

and place it in your local bin folder or in a public bin folder. Now set a root cronjob as follows:

debian:~# sudo crontab -e

My cronjob looks like this:

10 3 * * * /home/demo/bin/rkhunterscript -c --cronjob

This will run the script at 3.10am each day. Why 3.10am? Well, I have chkrootkit running at 3.00am, and I'd like that to finish before this one starts.
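If you'd rather keep a local log than receive email, the same brace-group trick works with a redirect instead of a pipe to mail; a sketch (the log path is my choice, not from the rkhunter docs):

```shell
#!/bin/sh
# Append each run, with a timestamp, to a logfile instead of mailing it.
log=/var/log/rkhunterscript.log
{
  date
  /usr/local/bin/rkhunter --update
  /usr/local/bin/rkhunter --cronjob --report-warnings-only
} >> "$log" 2>&1
```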

Howto: Printer Sharing with CUPS

This HOWTO describes how I set up printer sharing between machines running Debian GNU/Linux 4.0, aka "Etch", using the Common Unix Printing System, known as "CUPS". The setup in this HOWTO comprises one headless server with no desktop software and a number of KDE workstations. CUPS is a complex piece of software, so the method described here is just one I found to work; there are probably several other ways to achieve the same result. This HOWTO requires editing vital system files, so make sure to back up the originals before making any changes.
This HOWTO is licensed under the terms of the GNU Free Documentation License. It has been written in the hope that it will be useful to the community but it comes with no warranty; use it at your own risk.

This HOWTO assumes:
- That you have the proper drivers/software for your printer installed on your server. A very good site for finding drivers for Linux-compatible printers is http://www.linuxprinting.org/ The Gutenprint, Gimp-Print, and Foomatic packages are available through apt-get, but be aware that they might lag a bit behind, so if you have a brand new model your printer might not be supported. For clarity I will use my own printer, an Epson Stylus C48, and the Gutenprint driver as an example. I will also use 192.168.0.100 as the sample IP address for the server and 192.168.0.150 for the administration workstation.

- That you have a working network with SSH

- That you have some basic knowledge of TCP/IP networking

Setting up the printer server:
The first thing you need to do is to install CUPS:
Code:
apt-get install cupsys cupsys-driver-gutenprint foomatic-db-gutenprint foomatic-filters fontconfig libtiff4 libfreetype6

Note: You might need various other packages and libraries depending on your printing needs.

If your network uses DHCP, it's a good idea to set your server up with a static IP. As root, open the file /etc/network/interfaces in a text editor, locate the "# The primary network interface" section and change it to:

Code:
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.254

Note: Change the values to suit your network setup. You might also want to do the same on the administrator machine.
Save the new configuration and restart the network:
Code:
/etc/init.d/networking restart
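Before restarting the network it's worth double-checking the stanza for typos; a sketch using grep on a scratch copy (run the same grep against the real /etc/network/interfaces):

```shell
# Write a sample stanza, then confirm the static line and the address
# directly beneath it look right.
cat > /tmp/interfaces <<'EOF'
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
EOF
grep -A1 '^iface eth0 inet static' /tmp/interfaces
```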


Configure CUPS
Now we need to configure CUPS. In an editor, as root, open /etc/cups/cupsd.conf

First, check the encryption setting and change it to the following (if it's not there, add it):
Code:
# Encryption
Encryption IfRequested


Then we need to tell it to listen on the server itself:
Code:
# Only listen for connections from the local machine.
Listen localhost:631
Listen 192.168.0.100
Listen /var/run/cups/cups.sock

Note: This seems illogical and I have no explanation for it, but it simply did not work for me when using just 'localhost'.

We need it to be visible to the entire network:
Code:
# Show shared printers on the local network.
Browsing On
BrowseOrder allow,deny
BrowseAllow @LOCAL


We also have to tell it which machines may access the server:
Code:
# Restrict access to the server...
<Location />
Order allow,deny
Allow localhost
Allow 192.168.0.*
</Location>

Here you can specify each IP or, as in my example, use "*" as a wildcard.
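CUPS also accepts netmask and CIDR notation in Allow lines, so the following are equivalent ways of admitting the whole 192.168.0.x subnet (adjust to your own addressing):

```
Allow 192.168.0.*
Allow 192.168.0.0/24
Allow 192.168.0.0/255.255.255.0
```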

We don't want every user to be able to use the admin web interface, so we tell the server which machines are allowed:
Code:
# Restrict access to the admin pages...
<Location /admin>
Encryption IfRequested
Order allow,deny
Allow localhost
Allow 192.168.0.150
</Location>


And the same for the configuration files:
Code:
# Restrict access to configuration files...
<Location /admin/conf>
AuthType Basic
Require user @SYSTEM
Order allow,deny
Allow localhost
Allow 192.168.0.150
</Location>


The rest of the configuration I left as default:
Code:
# Set the default printer/job policies...
<Policy default>
# Job-related operations must be done by the owner or an administrator...
<Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>

# All administration operations require an administrator to authenticate...
<Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Accept-Jobs CUPS-Reject-Jobs CUPS-Set-Default>
AuthType Basic
Require user @SYSTEM
Order deny,allow
</Limit>

# Only the owner or an administrator can cancel or authenticate a job...
<Limit Cancel-Job CUPS-Authenticate-Job>
Require user @OWNER @SYSTEM
Order deny,allow
</Limit>

<Limit All>
Order deny,allow
</Limit>
</Policy>

To apply the new configuration we need to save the file and then restart CUPS:
Code:
/etc/init.d/cupsys restart

You should now be able to connect to the CUPS web interface from the administrator workstation (IP 192.168.0.150 in my example) by pointing your web browser at http://192.168.0.100:631/
Then, if you have the correct drivers installed and your printer is connected to the server, it's just a matter of adding your printer under the "Administration" tab. The web interface is fairly self-explanatory with good help sections, and the setup will vary depending on the printer, so I will not go into details about it here.
Finish the setup and verify that it works by printing the CUPS test page.

Setting up the CUPS clients:
The CUPS clients are easy to set up and the config is identical on all machines.
First install the CUPS client packages:
Code:
# apt-get install cupsys cupsys-client


Then create, as root, the file /etc/cups/client.conf
This file is really simple and contains only two settings: the server address and the encryption requirement:

Code:
# Servername
ServerName 192.168.0.100

# Encryption
Encryption IfRequested

Save the file, then restart the client:
Code:
/etc/init.d/cupsys restart
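If you're setting up several workstations, creating the file can be scripted with a here-document; a sketch (writing to /tmp for illustration - the real target is /etc/cups/client.conf):

```shell
# Write the two-setting client config in one go.
cat > /tmp/client.conf <<'EOF'
# Servername
ServerName 192.168.0.100

# Encryption
Encryption IfRequested
EOF
grep '^ServerName' /tmp/client.conf
```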

The last thing we need to do is set our printer as the default printer in KDE:
    Open Control Center --> Peripherals --> Printers
    Click "Administrator mode"
    Select "CUPS (Common UNIX Printing System)"

That's it! KDE will route all printing through your freshly set up CUPS client.

A personal note: KDE has a feature which should be able to set up both the CUPS server and client, but I could not get it to work. The idea is that KDE can create the config files, but the KDE setup program seemed to be way behind the CUPS package: it kept complaining about unknown setup options and I always ended up with weird configuration files. By creating the config files manually you bypass that problem entirely.

Wednesday, April 9, 2008

Howto upgrade Debian 3.1 Sarge to Debian 4.0 Etch stable

Debian 4.0 has been released, and it is recommended that you upgrade your system to the latest version. Upgrading a remote Debian server is a piece of cake :)

Currently many of our boxes are powered by Debian 3.1 Sarge. For example, a typical web server may have only the following packages:
=> Apache
=> PHP
=> Postfix and other mail server software
=> Iptables and backup scripts
=> MySQL 5.x etc

Procedure

Following are essential steps to upgrade your system:
1. Verify current system
2. Update package list
3. Update distribution
4. Update /etc/apt/sources.list file
5. Reboot system
6. Test everything is working
Backup your system

Before upgrading your Debian systems, make sure you have a backup (I'm assuming you already back up all important data and files every day):

1. User data / files / Emails (/home, /var/www etc)
2. Important system files and configuration file stored in /etc
3. MySQL and other database backup
4. Backup Installed package list [Get list of installed software for reinstallation / restore software]
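A minimal sketch of items 2 and 4 above (the destination under /tmp is illustrative; real backups belong off the box):

```shell
# Archive /etc and record the installed-package list before the upgrade.
backup_dir=/tmp/pre-etch-backup
mkdir -p "$backup_dir"
tar czf "$backup_dir/etc.tar.gz" -C / etc 2>/dev/null || true
dpkg --get-selections > "$backup_dir/packages.txt" 2>/dev/null || true
# Restore the package selections later with:
#   dpkg --set-selections < packages.txt && apt-get dselect-upgrade
ls "$backup_dir"
```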
Step #1: Verify current system

File /etc/debian_version stores current Debian version number :
$ cat /etc/debian_version
Output:

3.1

Find out kernel version
$ uname -mrs
Output:

Linux 2.6.8-3-386 i686

Step #2: Update package list

Use apt-get command:
# apt-get update
Step #3: Update distribution

Pass the dist-upgrade option to the apt-get command. This will upgrade Sarge to Etch. dist-upgrade, in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages; apt-get has a "smart" conflict resolution system, and it will attempt to upgrade the most important packages at the expense of less important ones if necessary.
# apt-get dist-upgrade

This upgrade procedure takes time. Depending on the installed software and other factors such as network speed, you may need to wait anywhere from 10 minutes to over an hour.
Step #4: Update /etc/apt/sources.list file

There seems to be a small bug in the upgrade procedure: you need to manually update the Debian security source line. You will see an error as follows:
W: Conflicting distribution: http://security.debian.org stable/updates Release (expected stable but got sarge)
W: You may want to run apt-get update to correct these problems

Just open /etc/apt/sources.list file:
# vi /etc/apt/sources.list
Find the line that reads as follows:
deb http://security.debian.org/ stable/updates main contrib
Replace it with:
deb http://security.debian.org/ etch/updates main contrib non-free

Save and close the file. Now type the following command:
# apt-get update
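The same replacement can be scripted with sed instead of vi; a sketch demonstrated on a scratch file (run the identical substitution against /etc/apt/sources.list once it looks right):

```shell
# Swap the security line from sarge/stable to etch.
echo 'deb http://security.debian.org/ stable/updates main contrib' > /tmp/sources.list
sed -i 's|stable/updates main contrib|etch/updates main contrib non-free|' /tmp/sources.list
cat /tmp/sources.list
```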
Step #5: Reboot system

You are done. Just reboot the system:
# reboot
Step #6: Make sure everything is working...

See Debian distro version:
$ cat /etc/debian_version
Output:

4.0

Make sure all services are running, and go through all the log files once.
# netstat -tulpn
# tail -f /var/log/log-file-name
# less /var/log/dmesg
# top
....
...
....

Use the apt-key command to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys will be considered trusted. Make sure you see the etch-related keys:
# apt-key list

/etc/apt/trusted.gpg
--------------------
pub 1024D/2D230C5F 2006-01-03 [expired: 2007-02-07]
uid Debian Archive Automatic Signing Key (2006)

pub 1024D/6070D3A1 2006-11-20 [expires: 2009-07-01]
uid Debian Archive Automatic Signing Key (4.0/etch)

pub 1024D/ADB11277 2006-09-17
uid Etch Stable Release Key

If there is a problem, use the following commands to update the local keyring with the Debian archive keys and remove keys which are no longer valid:
# apt-key update
# apt-key list

Finally, check whether any new updates/security updates are available:
# apt-get update
# apt-get upgrade
Further readings

* The above instructions are server-specific only; I've tested them on 3 different production web servers. The release notes for Debian GNU/Linux 4.0 ("etch"), Intel x86, have additional information about upgrades from previous releases, including special information about Debian Linux desktop systems and other troubleshooting hints.

Tuesday, April 8, 2008

Debian Guide

This is an unofficial guide to Debian 3.1, aka Sarge, the latest stable release. This guide is designed for everyone and is based on the Ubuntu Starter Guide - except it's for Debian, because Debian is far better. It basically shows how to tailor your Debian install to be as good as (and better than) Ubuntu Linux.

This guide collects information found through using Debian - mainly custom solutions, applications and extra touches to make Debian even better.

Introduction

What is Debian?

Debian is a free operating system (OS) for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian uses the Linux kernel (the core of an operating system), but most of the basic OS tools come from the GNU project; hence the name GNU/Linux.

Debian GNU/Linux provides more than a pure OS: it comes with over 15490 packages, precompiled software bundled up in a nice format for easy installation on your machine.

For more information, visit: http://www.debian.org/intro/about

How can I get Debian?

There are many ways. The best place to start is here: http://www.debian.org/distrib/

Where can I download Debian?

You can download Debian and burn to CD:

http://www.debian.org/CD/

Where can I find further Debian help?

There are many places around the web. Here is a short list:

* http://www.debian.org/support
* http://www.debian-administration.org/
* http://www.debianhelp.org/
* http://planet.debian.net/

Installation

Basic Installing

TBC.

Post-Reboot Setup

TBC.

Installing Core Packages

TBC.

Installing the X-Window-System and GNOME

TBC.

Installing Nifty Extras

TBC.

Further Applications

There are many further applications I have not covered above. They are documented below; each is optional, and each is free (though not all are open source).

Installing Applications & Updating Debian

What is Apt?

Apt is a core tool inside Debian. Apt makes it possible to:

* Install applications
* Remove applications
* Keep your applications up to date

Apt works with dpkg, another tool, which handles the actual installation and removal of packages (applications). Apt is very powerful, and can be used on the command line (console/terminal), and there are many GUI/Graphical tools to let you use Apt without having to touch the command line.

The documentation which follows is broken down into how you want to configure and use apt (either via the command line, or the graphical manager - synaptic).

Apt Command Line Tools

Configuring Apt

Apt downloads, installs, updates (and removes) packages (applications) on your Debian operating system. You can configure Apt to use a source (or multiple sources) to get these packages from. There are many kinds of sources - web (HTTP) servers, FTP servers, CD-ROM disks, network servers, etc. There are two ways to configure Apt from the command line:

Use apt-setup

Apt-setup lets you add an extra source to your Apt configuration.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-setup"
3. Follow the Wizard!

Edit sources.list

It's best to add new sources to Apt by using the apt-setup tool (see above). But you may want to remove old sources, or add your own directly:

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "nano /etc/apt/sources.list" (for a console editor) or "gedit /etc/apt/sources.list" (for a graphical editor)
3. Edit the sources file!
4. For help with the file, exit Nano (Ctrl+X) or gedit (Alt+F4) and type "man sources.list"

Installing an Application

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install package" where package is the name of the package (application) you want to install.

Removing an Application

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get remove package" where package is the name of the package (application) you want to remove.

Updating an Application

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install package" where package is the name of the package (application) you want to update - if the package is already installed, apt will upgrade it to the newest available version.

Keeping your system up-to-date

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get update".
3. Type "apt-get dist-upgrade"

Search for applications

1. Open a console window (Applications -> System Tools-> Terminal in GNOME)
2. Type "apt-cache search pattern" where pattern is the text to search for.

You may want to pipe (redirect) the output into "less" (a scrollable viewer) since the list may be huge:

apt-cache search pattern | less

List installed packages

1. Open a console window (Applications -> System Tools-> Terminal in GNOME)
2. Type "dpkg --list"
3. You may want to pipe (redirect) that to a program called "less" since the list will be long (type "dpkg --list | less")

Find what package a binary belongs to

This is a really neat function of dpkg. Basically, if you want to find out which Debian package a particular binary belongs to, do the following:

1. Open a console window (Applications -> System Tools-> Terminal in GNOME)
2. Type "dpkg -S /bin/foo" where /bin/foo is the full path to the binary

Simulate Upgrades

With apt-get you can simulate an upgrade - that is - show which packages would be installed if you did upgrade.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get -s upgrade"

Delete cached package files

If you want to delete the package files apt has already downloaded and installed from (via apt-get install), you can do the following (and reclaim a lot of disk space!):

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get clean"
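To see how much space the cache holds before you clean it, du works well; a sketch on a scratch directory standing in for /var/cache/apt/archives:

```shell
# Measure the cache size, "clean" it, then measure again.
cache=/tmp/fake-apt-cache
mkdir -p "$cache"
dd if=/dev/zero of="$cache/foo_1.0_i386.deb" bs=1024 count=64 2>/dev/null
du -sh "$cache"        # size before cleaning
rm -f "$cache"/*.deb   # this is essentially what apt-get clean does
du -sh "$cache"        # size after
```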

apt-spy

apt-spy will generate a sources.list file (the configuration file for apt package sources) for you! It measures the latency and bandwidth to servers, and picks the best one.

To get started, you'll need to install it, and then read how to use it:

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install apt-spy"
3. Read about how to use apt-spy: type "man apt-spy"

configure packages

When packages are installed, you are asked to configure them via a wizard (note: most packages don't require configuration). To reconfigure packages, do this:

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "dpkg-reconfigure <package>", where <package> is the name of the package.

Graphical Apt Tool

There is a tool called Synaptic which lets you use all of the power of apt from one tool.

First, you must install it:

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install synaptic".

Configuring Apt

1. Open Synaptic (Applications -> System Tools-> Synaptic Package Manager in GNOME)
2. Click the Settings Menu, and choose Repositories.
3. Configure!

Browsing, Installing, Removing

First - open Synaptic (Applications -> System Tools-> Synaptic Package Manager in GNOME)

Synaptic shows you all the packages available to you - and marks each one as installed or not installed. You can now navigate and find packages, marking packages you want to install (or remove) by clicking the tick box, and then click "Apply" to make changes.

Extra (Non-Standard) Applications

Network / Internet

Web Browsers

There are many web browsers available in Debian GNU/Linux. Epiphany, the GNOME web browser, will already have been installed if you followed the "Install" part of this guide or installed GNOME.

Firefox

Firefox is perhaps the best web-browser available. It is highly recommended. To install:

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install mozilla-firefox"

For more information: http://www.mozilla.org/products/firefox/

Galeon

Galeon is a lightweight web-browser designed for GNOME. It uses the same web-page rendering engine as Firefox.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install galeon"

For more information: http://galeon.sourceforge.net/

Mozilla Suite

The Mozilla Suite is a cross-platform integrated Internet suite. It contains a web browser (Navigator), a mail and newsgroups client (Mozilla Mail & Newsgroups), a web page editor (Mozilla Composer), an IRC client (ChatZilla) and an electronic address book.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install mozilla"

For more information: http://www.mozilla.org/

Konqueror

Konqueror is the standard web browser in KDE.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install konqueror"

For more information: http://www.konqueror.org/

Dillo

Dillo is a small (~350 kB), minimalistic multi-platform web browser. It has no support for CSS, JavaScript or frames.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install dillo"

For more information: http://www.dillo.org/

Opera

Opera is a web browser and Internet suite developed by the Opera Software company. Opera handles common Internet-related tasks such as displaying web sites, sending and receiving e-mail messages, managing contacts, IRC online chatting, downloading files via BitTorrent, and reading web feeds. Opera is offered free of charge for personal computers and mobile phones, but for other devices it must be paid for.

1. Download Opera here: http://www.opera.com/download/index.dml?platform=linux
2. Open the downloaded .deb-file with "GDebi-Installer".
3. Install Opera

For more information: http://opera.com/

Mail Clients

Evolution will already have been installed.

Thunderbird

Mozilla Thunderbird is a free, open source, cross-platform e-mail and news client developed by the Mozilla Foundation.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install thunderbird"

For more information: http://www.mozilla.com/thunderbird

Sylpheed

Sylpheed is an open source e-mail and news client licensed under the GPL. It offers easy configuration and an abundance of features, and stores mail in the MH Message Handling System format. Sylpheed runs on Unix-like systems such as Linux, BSD and Mac OS X, as well as on Windows. It uses GTK+.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install sylpheed"

For more information: http://sylpheed.sraoss.jp/en/

mutt

Mutt is a text-based e-mail client for Unix-like systems. It was originally written by Michael Elkins in 1995 and released under the GNU General Public License. Initially it resembled elm. Now the program most similar to it may be the newsreader slrn.

Mutt supports most mail formats (notably both mbox and Maildir) and protocols (POP3, IMAP, etc). It also includes MIME support, notably full PGP/GPG and S/MIME integration.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install mutt"

For more information: http://www.mutt.org/

FTP/SFTP/SCP Clients

gFTP

nautilus

Peer2Peer

Azureus

BitTorrent

eMule Client (aMule)

Instant Messaging

gAIM

Skype

X-Chat

irssi + screen

Media Players

Beep Media Player

Xine

MPlayer

Totem (w/ Xine)

Real Player

VLC

XMMS

Plugins, Runtimes and Viewers

Java Runtime Environment

Mono Runtime Environment

Adobe/Macromedia Flash Player/Plugin

PDF Viewer

CD and DVD

CD and DVD Burner

Compression Apps

Zip

Rar

Editing and Development

CLI Editors

GUI Editors

IDE's

Eclipse

NetBeans

Office / Authoring Applications

Gnome Office

OpenOffice.org

Other

Diagram Editor (Dia)

Nvu

System Configuration

Services and Boot Up

BUM

Gnome System Tools

etc.

Partition Manager

gParted

Firewall

Firestarter

Misc

aterm

Fonts

Compiling Applications

gDesklets

User Administration

Hardware

Networking

Tips and Tricks

Installing the ClearLooks GNOME Theme

Using the XFCE Desktop Environment

Sound: Using ALSA and ESD

Network Services

Remote Desktop

Windows File Server

SSH Server

Installing an SSH server is surprisingly simple and will allow you to access programs and files on your computer from anywhere that has access to port 22 on your computer. It's all done over an encrypted connection, so any data transferred is going to be completely secure. It does, however, expose you a little to anyone who can guess your password, so it's good practice to either make sure your root password is fairly complex or disable remote root logins altogether.

1. Open a root console window (Applications -> System Tools-> Root Terminal in GNOME)
2. Type "apt-get install ssh"
3. Most users can leave all the options as their defaults when asked.

In order to make your SSH server more secure and more useful, it's well worth editing your sshd_config file to disable root logins and to allow X forwarding. With X forwarding you can run graphical applications on your computer but have them displayed on another computer, provided it is also running X (either as part of a Linux/Unix/BSD system or under Cygwin on Windows).

1. Still in your root terminal, type "nano /etc/ssh/sshd_config".
2. Press Ctrl+W (for "Where Is").
3. Type "PermitRoot" and press Enter.
4. You should see the option "PermitRootLogin"; change it from "yes" to "no".
5. Press Ctrl+W again.
6. Type "X11For" and press Enter.
7. Change "X11Forwarding" from "no" to "yes".
8. Press Ctrl+X to exit, and Y to confirm that you want to save.
9. Type "/etc/init.d/ssh restart" to restart your SSH server and apply the changes.
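Once you have made both changes, the relevant directives in /etc/ssh/sshd_config should read as follows (a sketch of just these two lines; the rest of the file can stay at its defaults):

```
PermitRootLogin no
X11Forwarding yes
```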

If you need to do root things over SSH, SSH in as your normal user and type "su".

DHCP Server

Database Server

Web Server

FTP Server

How To Add Search Engines To Firefox's Search Bar

The Firefox web browser comes with a small number of default search engines in its Search Bar, located in the top right-hand corner of the browser window. Firefox gives you the ability to easily add more search engines by selecting from a predefined list.

Here's How:

1. Open your Firefox web browser.
2. Locate the Search Bar, located in the top right hand corner of your browser window.
3. In the Search Bar, you will see an icon for whatever search engine is currently active. By default, it will be Google's "G". Directly to the right of this icon is a down arrow. Click on either the icon or the arrow.
4. A drop-down menu will now appear. Choose the option labeled either Add Engines... or Manage Search Engines..., located at the bottom of the menu.
5. You are now taken to the Firefox Search Engines page, located in the main section of your browser window. Listed on this page are over 20 search engines, each with a description. To add a particular search engine to your Search Bar, click on its name.
6. After clicking on the name, you will see a dialog box labeled Add Search Engine. In this box will be the name of the search engine chosen as well as the search engine's category. Click OK to add the search engine to your Search Bar.
7. These steps can be repeated to add other search engine choices.

Creating OpenSearch plugins for Firefox

OpenSearch

Firefox 2 supports the OpenSearch description format for search plugins. Plugins that use the OpenSearch description syntax are compatible with IE 7 and Firefox. Because of this, they are the recommended format for use on the web.

Firefox also supports additional search capabilities not included in the OpenSearch description syntax, such as search suggestions and the SearchForm element. This article will focus on creating OpenSearch-compatible search plugins that support these additional Firefox-specific features.

OpenSearch description files can also be advertised as described in Autodiscovery of search plugins, and can be installed programmatically as described in Adding search engines from web pages.


OpenSearch description file

The XML file describing a search engine is actually quite simple, following the basic template below. The placeholder values need to be customized based on the needs of the specific search engine plugin you're writing.

<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
  <ShortName>engineName</ShortName>
  <Description>engineDescription</Description>
  <InputEncoding>inputEncoding</InputEncoding>
  <Image width="16" height="16">data:image/x-icon;base64,imageData</Image>
  <Url type="text/html" method="method" template="searchURL">
    <Param name="paramName1" value="paramValue1"/>
    ...
    <Param name="paramNameN" value="paramValueN"/>
  </Url>
  <Url type="application/x-suggestions+json" template="suggestionURL"/>
  <moz:SearchForm>searchFormURL</moz:SearchForm>
</OpenSearchDescription>

ShortName
A short name for the search engine.
Description
A brief description of the search engine.
InputEncoding
The encoding to use for the data input to the search engine.
Image
Base64-encoded 16x16 icon representative of the search engine. One useful tool for constructing the data to place here is The data: URI kitchen.
Url
Describes the URL or URLs to use for the search. The method attribute indicates whether to use a GET or POST request to fetch the result. The template attribute indicates the base URL for the search query.
Note: Internet Explorer 7 does not support POST requests.
There are two URL types Firefox supports:
  • type="text/html" is used to specify the URL for the actual search query itself.
  • type="application/x-suggestions+json" is used to specify the URL to use for fetching search suggestions.
For either type of URL, you can use {searchTerms} to substitute the search terms entered by the user in the search bar. Other supported dynamic search parameters are described in OpenSearch 1.1 parameters.
For search suggestion queries, the specified URL template is used to fetch a suggestion list in JavaScript Object Notation (JSON) format. For details on how to implement search suggestion support on a server, see Supporting search suggestions in search plugins.
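As an illustration (the query and the suggested terms here are invented), a suggestion response for the partial query "fir" is a JSON array whose first element echoes the query and whose second element lists the suggested completions:

```json
["fir", ["firefox", "firefox download", "firewall"]]
```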


Param
The parameters that need to be passed in along with the search query, as key/value pairs. When specifying values, you can use {searchTerms} to insert the search terms entered by the user in the search bar.
Note: Internet Explorer 7 does not support this element.
SearchForm
The URL to go to to open up the search page at the site for which the plugin is designed to search. This provides a way for Firefox to let the user visit the web site directly.
Note: Since this element is Firefox-specific, and not part of the OpenSearch specification, we use the "moz:" XML namespace prefix in the example above to ensure that other user agents that don't support this element can safely ignore it.
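Putting these elements together, a minimal description file for a hypothetical engine might look like the following (the name and URLs are invented for illustration, and the optional Image element is omitted):

```xml
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <Url type="text/html" method="GET"
       template="http://www.example.com/search?q={searchTerms}"/>
  <moz:SearchForm>http://www.example.com/</moz:SearchForm>
</OpenSearchDescription>
```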

Autodiscovery of search plugins

A web site that offers a search plugin can advertise it so that Firefox users can easily download and install the plugin.

To support autodiscovery, you simply need to add one line to the <head> section of your web page:

<link rel="search" type="application/opensearchdescription+xml" title="searchTitle" href="pluginURL">

Replace the italicized items as explained below:

searchTitle
The name of the search to perform, such as "Search MDC" or "Yahoo! Search". This value should match your plugin file's ShortName.
pluginURL
The URL to the XML search plugin, from which the browser can download it.

If your site offers multiple search plugins, you can support autodiscovery for them all. For example:

<link rel="search" type="application/opensearchdescription+xml" title="MySite: By Author" href="http://www.mysite.com/mysiteauthor.xml">

<link rel="search" type="application/opensearchdescription+xml" title="MySite: By Title" href="http://www.mysite.com/mysitetitle.xml">

This way, your site can offer plugins to search both by author and by title as separate entities.

Troubleshooting Tips

If there is a mistake in your Search Plugin XML, you could run into errors when adding a discovered plugin in Firefox 2. The error message may not be entirely helpful, however, so the following tips could help you find the problem.

  • Be sure that your Search Plugin XML is well formed. You can check by loading the file directly into Firefox. Ampersands in the template URL need to be escaped as &amp;, and tags need to be closed with a trailing slash or a matching end tag.
  • The xmlns attribute is important, without it you could get an error message indicating that "Firefox could not download the search plugin from: (URL)".
  • Note that you must include a text/html URL — search plugins including only Atom or RSS URL types (which is valid, but Firefox doesn't support) will also generate the "could not download the search plugin" error.
  • Remotely fetched favicons must not be larger than 10KB (see bug 361923).
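For example, a template URL that carries more than one query parameter must have its ampersand escaped (example.com is a stand-in here):

```xml
<Url type="text/html" method="GET"
     template="http://www.example.com/search?q={searchTerms}&amp;lang=en"/>
```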

In addition, the search plugin service provides a logging mechanism that may be of use to plugin developers. Use about:config to set the pref 'browser.search.log' to true. Logging information will appear in Firefox's Error Console (Tools->Error Console) when search plugins are added.

Add Ixquick or any engine to your Opera Browser

You can now easily add Ixquick to your Opera browser.
The Ixquick search engine will then appear in the drop-down menu next to the search bar inside your browser.

Instructions:

1. Go to http://www.ixquick.com.
2. Right-click on the Ixquick search box (on the left of the blue "search" button).
3. Select "Create search" from the context menu.
4. In the "Keyword" field, insert 'ix'.
5. Click "Details >>", and check "Use as default search engine".
6. Click "OK", and you can use Ixquick as your preferred search plugin.
7. If Ixquick does not appear as default in the search box, click on the little black arrow in the Opera search box (right-top in your browser) and select Ixquick.


After adding the engine, choose Ixquick from the search box's drop-down menu. Now you're ready to search: just enter a search term and press Enter!

Adding Ixquick or any search engine to Firefox browser

With the fantastic Firefox plugin Add to Search Bar, you can easily add any search engine or field to your Firefox Search Bar. This brief tutorial will show you how to use it to make your searching life that much easier.

  1. Start by installing the Add to Search Bar Firefox plugin. Like most FF plugins, you’ll need to restart Firefox in order to start using it.
  2. Once Firefox has restarted, go to a web page that you frequently visit (like say.. I don’t know.. Simplehelp.net). Right-click (ctrl-click for single-button Mac folks) in the search field and choose Add to Search Bar…
  3. A small pop-up will appear asking you what you’d like to call this search entry. Give it a name, and if you’re not happy with the default icon, you can choose one from your hard drive (png, gif and jpg all seem to work). Click OK.
  4. That search engine will now be available for you to use directly from the Firefox Search Bar.
  5. If at any point in time you want to remove some of your custom search entries (or any of the defaults), select the Search Bar drop-down and choose Manage Search Engines…
  6. Highlight the one(s) you want to delete, and click the Remove button. You can also re-order the search engines from here; whichever entry you put at the top will be your Firefox default.
  7. Some search engines that I’ve found very helpful to add are Gmail, Orkut, LinkedIn, Ixquick, Ask.com, Gigablast, Entireweb, AlltheWeb, AltaVista, Wikipedia, Facebook, etc. But of course, the sky is the limit :)

THE GRID: 10000 times faster than BROADBAND (Grid Computing)

Grid computing is a term in distributed computing which can have several meanings:

  • Multiple independent computing clusters which act like a "grid" because they are composed of resource nodes not located within a single administrative domain. (formal)
  • Offering online computation or storage as a metered commercial service, known as utility computing, computing on demand, or cloud computing.
  • The creation of a "virtual supercomputer" by using spare computing resources within an organization.
  • The creation of a "virtual supercomputer" by using a network of geographically dispersed computers. Volunteer computing, which generally focuses on scientific, mathematical, and academic problems, is the most common application of this technology.

These varying definitions cover the spectrum of "distributed computing", and sometimes the two terms are used as synonyms. This article focuses on distributed computing technologies which are not in the traditional dedicated clusters; otherwise, see computer cluster.

Functionally, one can also speak of several types of grids:

  • Computational grids (including CPU-scavenging grids), which focus primarily on computationally intensive operations.
  • Data grids, for the controlled sharing and management of large amounts of distributed data.
  • Equipment grids, which are built around a primary piece of equipment (e.g. a telescope), where the surrounding grid is used to control the equipment remotely and to analyze the data it produces.

During 2007 the term cloud computing came into popularity. It is conceptually identical to the canonical Foster definition of grid computing. In practice all clouds are grids, but not all grids manage a cloud.


[Image: Virtual organizations accessing different and overlapping sets of resources]

Grids versus conventional supercomputers

"Distributed" or "grid" computing in general is a special type of parallel computing which relies on complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.

The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which when combined can produce similar computing resources to a multiprocessor supercomputer, but at lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.

The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet. Conventional supercomputers also create physical challenges in supplying sufficient electricity and cooling capacity in a single location. Both supercomputers and grids can be used to run multiple parallel computations at the same time, which might be different simulations for the same project, or computations for completely different applications. The infrastructure and programming considerations needed to do this on each type of platform are different, however.

There are also some differences in programming and deployment. It can be costly and difficult to write programs so that they can be run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a "thin" layer of "grid" infrastructure can allow conventional, standalone programs to run on multiple machines (but each given a different part of the same problem). This makes it possible to write and debug programs on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.

Design considerations and variations

One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.

One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.
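The redundancy check described above can be sketched in a few lines of shell (the function and the result values are invented for illustration; real grid middleware such as BOINC implements far more elaborate validators):

```shell
#!/bin/sh
# Sketch: accept a work unit's result only when two independently
# assigned nodes report the same answer.
verify_work_unit() {
    result_a="$1"   # result reported by the first node
    result_b="$2"   # result reported by the second node
    if [ "$result_a" = "$result_b" ]; then
        echo "accepted: $result_a"
    else
        echo "rejected: nodes disagree ($result_a vs $result_b)"
    fi
}

verify_work_unit "3.14159" "3.14159"   # matching results are accepted
verify_work_unit "3.14159" "2.71828"   # a discrepancy flags the work unit
```

A discrepancy doesn't tell you which node is wrong, which is why real systems dispatch the unit to yet another node and look for a majority.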

Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results as expected.

The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.

In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust "client" nodes must place in the central system such as placing applications in virtual machines.

Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a tradeoff between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform).

Various middleware projects have created generic infrastructure, to allow diverse scientific and commercial projects to harness a particular associated grid, or for the purpose of setting up new grids. BOINC is a common one for academic projects seeking public volunteers; more are listed at the end of the article.

CPU scavenging

CPU-scavenging, cycle-scavenging, cycle stealing, or shared computing creates a "grid" from the unused resources in a network of participants (whether worldwide or internal to an organization). Typically this technique uses desktop computer instruction cycles that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices.

Volunteer computing projects use the CPU scavenging model almost exclusively.

In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power. Since nodes are apt to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.