Sunday, February 21, 2010

Making a Ten-Year-Old Computer Useful with Linux

Even a computer that is nearly ten years old can be made useful again. A certain computer in my house has remained untouched for about five years, but today it found new life with Linux.

In May of 2005 we replaced one of our computers that had been in service for exactly five years. The system specifications were tired even then: a 750MHz AMD Athlon processor, 256MB of RAM, and a 20GB hard drive. It was running Windows 98 when it was decommissioned. It was stored, moved to a new state, and stored some more.

Yesterday I continued working on the home network infrastructure and looked for a potential fileserver. I found this decade-old machine, gave it power and peripherals, and booted it up. Slow as the system was, I was able to navigate the files and actually found over two years of digital photographs that had been presumed lost when their backup CDs proved unreadable.

After the recovery came the rejuvenation. Since this machine is to be a network workhorse, it needed a reliable operating system with powerful tools readily available. It should be no surprise that I chose Linux, specifically the Ubuntu 9.10 distribution. Even on this old system the installation proceeded without a hitch, and the subsequent package updates finished with little trouble.

Previous experience indicates that this would have been more costly and less flexible with Windows. Indeed, it would have been impossible to use the latest version of Windows (see http://windows.microsoft.com/systemrequirements) because the motherboard cannot even hold enough RAM to meet the minimum requirement.

While the system will need a RAM upgrade and a card to accept FireWire connections to external hard drives, the machine will likely be one of the silent, steady sentinels in our home network.

For the technorati …

While gathering information about this system I discovered the dmidecode command. This tool needs to be run with administrator privileges. With it I was able to discover the CPU speed, CPU maker, RAM information, and even the build date.

sudo dmidecode -t processor
sudo dmidecode -t system
sudo dmidecode -t memory
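
The processor query, for example, produces output along these lines. The fields are typical of dmidecode; the values here are illustrative rather than copied from the machine in question.

# dmidecode 2.9
Processor Information
        Socket Designation: Socket A
        Type: Central Processor
        Family: Athlon
        Manufacturer: AuthenticAMD
        Current Speed: 750 MHz
 [--snip--]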

An old article (http://www.linux.com/archive/feed/40412) describes the information from dmidecode as not completely reliable, but useful. I found it useful for my purposes.

Thursday, February 18, 2010

Fixing the monitor's low resolution after Ubuntu 9.10 installation

After a fresh installation of Ubuntu 9.10 (karmic) on a stock Dell Dimension 3000 system, the display could not be set to any mode higher than 800x600. An initial twinge of frustration was coupled with confidence that the problem was tractable. This is the short story of how the Ubuntu and open source communities helped provide the solution.

Historically, whenever I encounter display problems I look at the xorg.conf file, which is usually found in /etc/X11. On this installation day I was surprised to discover that xorg.conf was not in its usual place. Indeed, my surprise became something unnameable when a search of the entire filesystem revealed that the file did not exist. While it is possible that a step or two after installation could have removed the file, the fact remains that some high-level procedure left me without an xorg.conf.

It was tempting to search out what mechanisms have replaced xorg.conf and to understand the advertised improvements of the new way, but in the short term I just wanted better resolution. Within the first half hour or so I found what turned out to be the solution, but I initially hoped for a simpler way. I learned a valuable lesson in that dismissal: read and understand before trading away over an hour searching for a presumed simpler way.

Searching the Ubuntu wiki revealed the article titled Reverting the Jaunty Xorg intel driver to 2.4, and my initial reaction was that my system (karmic) was newer than the one in the article (jaunty). Even so, I saw the seven steps, four explicit and three potential, and thought this might work. However, two of the potential steps (obtaining and adding a validation key) were something I remembered doing before with some trouble. The third potential step, related to the key, was opening a port on my firewall. That last step is the one that sent me looking elsewhere, not because I didn't want to make the change but because I sought something that smacked less of system administration to the normal home user. In the end, none of the potential steps proved necessary.

The four steps that solved the resolution problem took less than four minutes to accomplish.

  1. Add the following two lines to the bottom of /etc/apt/sources.list (I used sudo vi /etc/apt/sources.list to do the edits; use your favorite editor as an administrator)
    deb http://ppa.launchpad.net/siretart/ppa/ubuntu jaunty main
    deb-src http://ppa.launchpad.net/siretart/ppa/ubuntu jaunty main
    
    • NOTE: I did indeed use 'jaunty' rather than 'karmic'.
  2. Update package list
    sudo apt-get update
    
  3. Install the xserver-xorg-video-intel-2.4 package
    sudo apt-get install xserver-xorg-video-intel-2.4
    
    • NOTE: A warning appeared that "packages cannot be authenticated!", but I was able to install "without verification." This is related to my not adding the validation key and is not my preferred mode of operation. Even so, the key was not a necessary step in the present solution (see the note below).
  4. Restart the display
    sudo /etc/init.d/gdm restart
    

The display immediately went to a much more reasonable resolution. Problem solved.
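
A closing note on the authentication warning from step 3: those who prefer to have the packages verified can import the PPA's signing key with apt-key before updating. The key ID below is a placeholder; the real one is listed on the PPA's Launchpad page.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY-ID>

After importing the key, repeat step 2 and the warning in step 3 should no longer appear.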

Saturday, February 13, 2010

Searching the contents of text files

EXECUTIVE SUMMARY

Search for the string 'Recipe' in all files that have the .org or .html extension anywhere in the current directory or below, ensuring that the filename is prepended to all matches:

> grep -e 'Recipe' `find .  \( -name "*.org" -o -name "*.html" \)` /dev/null

Same as above except all non-binary files are searched:

> grep -HIre 'Recipe' *

SUPPORTING JABBER

Consider the situation where you have many text files in a certain directory tree and you want to discover which files have particular content. Here we discuss the use of grep and find to help solve this problem. Modern versions of grep remove the need to use find, and we will discuss that method after the one applicable to more disadvantaged systems.

The grep command is used to search the contents of files. A familiar output is to have the filename prepended to the line that matches the search, for example

> grep -e 'ground' *
photos.org:   background.  I also had the privilege of seeing the physical
photos.org:   ground on the night of the 27th.  On the morning of the 28th there
Quotes.org:going to take a lovely, simple melody and drive it into the ground. --

It is tempting to interpret the prepended filename as the default behavior. However, whether the filename appears or not also depends on the context in which grep was invoked. Specifically, when grep is given a single file to search, the filename is not prepended,

> grep -e 'ground' photos.org
   background.  I also had the privilege of seeing the physical
   ground on the night of the 27th.  On the morning of the 28th there

This is reasonable behavior from grep's perspective: since only a single file was given, there should be no doubt about which file contained the match. As we will see below, there are times when grep is provided a single file but the user does not know in advance which file it is. In these cases we want to force the filename to be identified. One way to do this is to pass grep the real file plus one other file whose contents will never match the search expression, for example /dev/null. Witness the difference,

> grep -e 'ground' photos.org /dev/null
photos.org:   background.  I also had the privilege of seeing the physical
photos.org:   ground on the night of the 27th.  On the morning of the 28th there

Before continuing, there are two observations to be made about the grep invocations above. First, and almost as an aside, the calls could have been written a bit more simply by dropping the -e switch and the quote marks (e.g., grep ground *). However, this construct allows for more complex search expressions. An example is to find either the word 'ground' or the word 'Recipe' in any files,

> grep -e 'ground\|Recipe' *
photos.org:   background.  I also had the privilege of seeing the physical
photos.org:   ground on the night of the 27th.  On the morning of the 28th there
Quotes.org:going to take a lovely, simple melody and drive it into the ground. --
Recipes.org:#+TITLE: Recipes
sitemap.org:   + [[file:Recipes.org][Recipes]]

The observation that pertains directly to the problem at hand is that the list of files for grep to search must be specified somehow. If all the files are in the same directory, then a simple wildcard expression might be all that is needed. However, sometimes the search is to be done recursively or across several directories.

The find command is useful for finding files on the system with particular characteristics. As an example, the following expression finds all files in the current directory and below that have either a .org or .html extension,

> find .  \( -name "*.org" -o -name "*.html" \)
backcountry/photos.html
backcountry/readme.html
backcountry/maintenance.html
backcountry/sitemap.html
backcountry/index.html 
 [--snip--]
templates/rketburt-01-Level00.org
templates/rketburt-01-Level01.org
 [--snip--]

Be aware: the spaces after the \( and before the \) proved to be vital while testing commands for this article. This is in fact a general requirement rather than a quirk of one system; find expects each parenthesis to arrive as a separate argument, so the shell must see whitespace around them.
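
To make the failure concrete: without the space, the shell hands \(-name to find as a single argument, which find cannot parse as a predicate (the exact error message varies by implementation).

> find . \(-name "*.org" -o -name "*.html" \)
 [fails: find cannot parse \(-name]
> find .  \( -name "*.org" -o -name "*.html" \)
 [works]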

Now it is a simple matter to search the contents of multiple files. We build the file list using find embedded in backticks (`) to capture the result, then invoke grep on that list. Here is a complete example,

> grep -e 'Recipe' `find .  \( -name "*.org" -o -name "*.html" \)` /dev/null
rketburt-org/Recipes.org:#+TITLE: Recipes
rketburt-org/sitemap.org:   + [[file:Recipes.org][Recipes]]
rketburt/sitemap.html:<a href="Recipes.html">Recipes</a>
rketburt/Recipes.html:<title>Recipes</title>
rketburt/Recipes.html:<h1 class="title">Recipes</h1>
rketburt/index.html:<a href="Recipes.html">Recipes</a>

Note the use of /dev/null as a file argument to grep to ensure that the filename is prepended.
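
One caveat with the backtick construction, worth flagging even though it did not arise here: the shell splits the captured file list on whitespace, so filenames containing spaces will be broken into pieces. Where find and xargs support the null-separator options (as GNU versions do), the following variant avoids that problem:

> find .  \( -name "*.org" -o -name "*.html" \) -print0 | xargs -0 grep -e 'Recipe' /dev/null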

Another way to achieve the same final result is to invoke find first and use its -exec argument to call grep. In this ordering grep is provided only a single file at a time, which leads to the missing-filename problem indicated earlier. The overall syntax is a bit more cumbersome as well, since {} is used to pass each result from find to grep, and a trailing \; is required. An equivalent example to the one in the previous paragraph is

> find .  \( -name "*.org" -o -name "*.html" \) -exec grep -e 'Recipe' {} /dev/null \;

Syntax or preferences aside, it is interesting to note that while these two examples provided the same end result, the one that begins with grep executed nearly 10 times faster.
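
The slowdown has a simple explanation: the trailing \; makes find launch a separate grep process for every file it finds. Implementations of find that accept -exec terminated with + instead of \; hand many filenames to each grep invocation, which should recover most of the lost speed. Note that this form requires {} to appear last, so the /dev/null trick cannot be used, though grep prints filenames anyway whenever it receives more than one file:

> find .  \( -name "*.org" -o -name "*.html" \) -exec grep -e 'Recipe' {} +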

The find command has been used above for two reasons. First, the desire was to search files that may appear in directories below the one called out. In other words we desired a recursive search. The second reason was to eliminate the prospect of searching non-text files which would have simply been a time sink. The method to exclude the binary files was to limit the file extensions to just two (.org and .html). This may be the exact behavior desired for some questions, but may be too restrictive for others.

Modern versions of grep permit both recursive searching (-r) and binary file exclusion (-I). Additionally, prepending the filename can be specified (-H) even in the event only a single file is searched. To find all text files in or below the current directory that contain the string 'Recipe', the command is now simply

> grep -HIre 'Recipe' *

During testing for this article, the time to complete was at its fastest only about twice that of the grep that uses find in backticks, and at its slowest over 100 times slower. This difference may have been due to system load, or possibly to the fact that there were hundreds of files that together totaled nearly 2GB. Even so, there may be times when the blind search is well worth the time spent to discover something.
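
As a final aside for GNU grep users, the --include option restricts a recursive search to filenames matching a pattern, which reproduces the extension filtering done above with find in a single command:

> grep -rH --include='*.org' --include='*.html' -e 'Recipe' .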