This month, MC Mag Online focuses on performance, which seems uncannily timely. You can't watch television for more than a few minutes without hearing about some sports figure's use of performance-enhancing drugs (or about supplements alleged to improve a different sort of performance). But don't worry, I won't go there. Instead, I'll focus on the type of performance computer professionals are most concerned with: that of their computer systems.
What, Me Worry?
I've been in this business for more years than I like to admit. I remember writing software to structure the disks (thus enhancing the disk performance) on a PDP-11/34 running RSTS, and bit-twiddling algorithms to wring out the last iota of performance from the CPUs. When my company migrated from the System/34 to the System/38, I wrote some of my code in MI to make my software more responsive. And like many of you, I sat in classes at IBM Technical Conferences where the topic was OS/400 performance tuning. Ahhhh...the good old days.
Over the years, OS/400 has evolved--and with it, the tools to manage the system. The gurus in Rochester have automated most of those tedious tuning procedures to the point that most i5 users will never have to give performance tuning a second thought. Unfortunately, this same automation is not yet in Linux, so from a tuning standpoint, Linux is now at a point similar to that of OS/400 V2R3. This is not to say that Linux is a dog. On the contrary, it's quite snappy. But by keeping some simple things in mind, you can really make even mediocre hardware seem quick.
Less Is More
All current Linux distributions come on multiple CDs containing an amazing amount of software. The list of programs presented to you during the system installation can be quite tantalizing. Unless you are creating a play system (a.k.a. a development box), you really need to resist the temptation to load every available package. Instead, load only those packages that support the function for which the machine is destined. There are some very good reasons to do this.
One is that by minimizing the software manifest, you minimize the disk space consumed. (Now, that's obvious!) Although this reason may not seem particularly compelling (given the three-figure-gigabyte disks that are now commonly available), you will find that trimming the fat shortens your backup and restore times. We all know that system availability is always an issue, so anything we can do to shorten our backup window is a plus.
Another reason you will want to minimize the software manifest is because it will save you time during the install. There is no doubt that you will want to update your system immediately after loading it. Since virtually all distributions provide their updates via the Internet, and since most package management tools allow you to load new software as well as updates from those networked repositories, it makes sense to initially load only a base system. That way, you can add any packages that you need from the network and get the most up-to-date versions of those packages at the same time. There is no use loading down-level software just to upgrade it immediately thereafter.
If the box on which you are loading Linux will be facing the Internet, then perhaps the most compelling reason to keep package count low is to minimize the tools available to a cracker, should he somehow get shell access to your machine. Providing a full suite of development tools on an Internet server is like building a bank inside a hardware store: Readily available tools make burglary very convenient. Maintaining a secure system is hard enough without handing the tools to the cracker. On the same note, by loading and running software services that you don't need, you provide additional possibilities for exploitation and more packages for you to have to police. If you don't need it, don't load it!
On my systems, I use Red Hat Enterprise Linux and some of its clones, such as CentOS and White Box Linux. When I do an install, I usually choose the "minimum" install so that I get a base system, and then I add the packages I intend to use with the up2date and yum package management tools. The result is a lean, mean computing machine with just the basics installed. I can add whatever else I need later.
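For instance, on one of the Red Hat-style systems just mentioned, the whole post-install routine boils down to a couple of commands. These are illustrative rather than a script to paste: the package names are examples only, and you'll need root on a machine that can reach the update repositories.

```
# Bring the freshly loaded base system current from the network:
yum -y update            # or `up2date -u` on Red Hat Enterprise Linux

# Then add only what this box will actually serve:
yum -y install httpd mysql-server
```

Everything arrives already at its latest version, so there's no separate patching pass afterward.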
Of course, this advice is applicable to any operating system, not just Linux. I mention it only because so many people seem to revel in bloated software installs nowadays. Perhaps that's because it's so much easier to just install everything than it is to actually do some planning. And the current crop of hardware helps to hide this sin. Just keep in the back of your mind that there is always a performance penalty for every bit of software you have running. In your quest for the speediest system possible, less is more.
Don't Get Gooey!
I can't think of a bigger waste of a server's CPU cycles and memory than running a GUI. Unlike Redmond-esque OSes, where the GUI is integrated with the OS, Linux provides its graphical interface through a separate application (the X Window System), so you aren't obliged to waste the resources necessary to load and run one.
For a true iSeries bigot, a text user interface (TUI) is just fine, but I suppose the comfort level with a command line will diminish as IBM drives everyone to iSeries Navigator. If you are already a GUI junkie, you have a couple of options for Linux server administration that will minimize the performance impact.
First, you can load X windows on your system but use it only when necessary. To accomplish this, configure the server to boot into text mode (typically, run level 3). When you want to administer the system, you can log on as root and then issue the command startx, which will give you the GUI fix that you crave. Once you are done with whatever tasks you need to complete, simply log off from the GUI, and you'll find yourself back at the command line. The resources needed for the GUI will be released for more productive use.
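As a concrete sketch, on a Red Hat-style system booting into text mode comes down to a single line in /etc/inittab (the file location and syntax vary by distribution, so check yours before editing):

```
# /etc/inittab (excerpt): make run level 3 (multi-user, text mode)
# the default instead of run level 5 (graphical login)
id:3:initdefault:
```

After a reboot, the GUI consumes nothing until you explicitly ask for it with startx.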
Another option is to take advantage of the remote graphical capabilities of X windows: Log in on a different graphical workstation, and then use it to administer your servers. The secure shell software included with every Linux distribution makes this an exquisitely easy and attractive solution. As an added benefit, you don't need to waste calories walking into the machine room to get to a console. You can do it without ever letting your office chair get cold.
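As a sketch (the host name here is hypothetical), an entry in your ~/.ssh/config makes the X forwarding automatic:

```
# ~/.ssh/config: forward X11 traffic from this server back to the
# local display (host name is illustrative)
Host webserver
    HostName webserver.example.com
    ForwardX11 yes
```

With that in place, a command like ssh webserver redhat-config-packages runs a graphical administration tool on the server while its window appears on your workstation.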
Finally, you can use something like Webmin, a Web-based administration client, to assist you in your administrative duties. The GUI-based administration tools normally provided by Linux distributions require the X Window System to run. However, Webmin and its ilk take a different tack; they use CGI scripts on the server to effect the desired configuration changes. Same great GUI experience, but less filling! If you will be running a Web server (such as Apache) on the server, then there really isn't any appreciable resource cost to using Webmin. The only downside to using a third-party tool like Webmin is that the distribution's documentation will not be as useful, since the tools it describes won't be the same. On the other hand, Webmin works on a large number of Linux distributions as well as many UNIX variants and BSDs, including Mac OS X Server. So you can learn one interface to administer them all.
Application Tuning
Most of the servers I get involved with use the Apache Web server and one of the open-source database management systems such as PostgreSQL or MySQL. Installing one or more of these packages is extremely simple on the Red Hat-based systems I typically use. What you need to realize, however, is that the installations provided are plain vanilla configurations. They don't take into account the capacity of your machine but instead have a default configuration suitable for a typical "Joe six-pack" machine. The defaults will work fine for the majority of the users who employ them, but for the person interested in obtaining maximum performance, some reading of the documentation is in order. It just so happens that Apache, PostgreSQL, and MySQL are excellent examples of applications just begging to be tuned.
The Apache Web server on my Fedora Core 1 system has 44 modules configured to be loaded on startup. There are modules for IMAP access, modules for spellchecking, modules to enable proxy hosts, modules to allow alternate means of authentication, and a host of other modules. Do I actually need all of these loaded? No. If I were interested in maximizing the performance of Apache on my laptop, I would start by clearing the cruft from its configuration file, disabling any modules that I don't need for my Web sites. I could also improve Apache's responsiveness by tweaking the number of worker threads created and kept spare. There is a lot of room for tuning in that configuration file, and a little experimentation can yield some significant results.
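Here is an illustrative sketch of the module-trimming step. The three LoadModule lines are stand-ins for whatever your real httpd.conf loads; the point is simply that commenting a directive out is enough to keep that module from being loaded.

```shell
#!/bin/sh
# Work on a small sample config so the effect is easy to see.
# Audit your own LoadModule list before doing this for real.
cat > httpd.conf.sample <<'EOF'
LoadModule proxy_module modules/mod_proxy.so
LoadModule speling_module modules/mod_speling.so
LoadModule auth_module modules/mod_auth.so
EOF

# This site needs neither proxying nor spell-correction, so comment
# those two directives out (GNU sed; "&" re-inserts the matched text):
sed -i -e 's/^LoadModule \(proxy\|speling\)_module/#&/' httpd.conf.sample

grep -c '^#LoadModule' httpd.conf.sample    # prints 2
```

The same file holds the worker-thread knobs (StartServers, MinSpareServers, and MaxSpareServers in the default prefork setup), which respond well to the same experiment-and-measure approach.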
What about the databases? In the documentation directory for MySQL, you will find four sample configuration files: my-small.cnf, my-medium.cnf, my-large.cnf, and my-huge.cnf. The comments in my-large.cnf start with these lines:
# Example mysql config file for large systems.
#
# This is for large system with memory = 512M where the system
# runs mainly MySQL.
The comments also contain the recommended settings for this type of machine. Any of these files is a drop-in replacement for the default MySQL configuration, already optimized for a particular hardware environment. A couple of minutes spent making the switch will reward you with improved database performance.
Although PostgreSQL doesn't provide drop-in configurations like MySQL's, a quick scan of its configuration file shows a plethora of potential tweaks, all based on the size of your target database and of your machine. So where do you start? I keyed the query "postgresql performance" into Google, just to see what I'd get. The first three results were Postgresql Database Performance Tuning, Postgresql Performance Tuning (Linux Journal), and Postgresql Performance Tips. The information is out there, easily located, and well worth your time to study.
Results from the queries "mysql performance" and "apache performance" were equally rewarding. The bottom line is that you will definitely want to research the applications that you are using to see what you can do to optimize their performance.
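To give a flavor of what a scan of postgresql.conf turns up, here are three commonly discussed parameters. The names are real (from the 7.x releases of PostgreSQL), but the values are purely illustrative; they must be sized to your own RAM and workload:

```
# postgresql.conf (excerpt): values shown are illustrative only
shared_buffers = 4096          # shared buffer cache, in 8KB pages
sort_mem = 8192                # per-sort working memory, in KB
effective_cache_size = 65536   # planner's estimate of the OS disk
                               # cache, in 8KB pages
```

Raising shared_buffers from the tiny default is usually the first change those tuning articles recommend, but every one of them stresses measuring against your own workload before and after.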
For the Hard-Core Bit-Twiddler
One of the extraordinary things about an open-source operating system is that all of the information you could possibly want about the OS is readily available. If you're hard-core, then you will want to install the kernel source package. Even if you don't plan on compiling the kernel, the documentation found within is worth the disk space consumed by the source tree. Some quality time spent with this can give you some interesting clues on how to change the behavior of the Linux kernel.
How easy is it to make changes to a running Linux system? The Linux file system has an interesting directory called "/proc" which, when displayed using the ls command, appears to contain a large number of other directories and files. In actuality, /proc is a view into the running kernel, not real files and directories. You can use its contents to learn about the hardware that the kernel has identified or the processes that the kernel is currently running. But the most important point for the bit-twiddler is that you can retrieve and modify the settings on which the kernel is basing its behavior. For this purpose, you'll want to investigate the /proc/sys/vm directory.
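A minimal sketch of how direct this is: the swappiness knob (a value from 0 to 100) tells the kernel how aggressively to swap process memory out in favor of file cache. Reading it requires no privileges at all; writing it (root only, shown as a comment) changes the running kernel immediately. The value 10 below is illustrative, not a recommendation.

```shell
#!/bin/sh
# Read a VM tuning parameter straight out of the running kernel:
cat /proc/sys/vm/swappiness

# As root, writing the same "file" changes kernel behavior on the spot:
#   echo 10 > /proc/sys/vm/swappiness
```

Changes made this way last only until the next reboot, which makes /proc a safe place to experiment before committing a setting permanently.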
If you have a busy server, you can tune it for optimum performance. Running a Linux laptop? Then you can tune your machine for optimum battery life. The options are endless! The whole topic of kernel tuning would constitute a series of articles, so I won't get into more detail in the short space I have here. Remember, Google is your friend, so if this topic interests you, be sure to do a "linux kernel tuning" search, grab a cup of coffee, and settle in for some interesting reading.
Don't Settle for Average
Any stock Linux box, with sufficient hardware resources, will give you satisfactory performance for most tasks. The ease of making adjustments lets you wring out all of the performance that your hardware can muster. If things aren't working to your satisfaction, then by all means, do your homework and take advantage of the openness of open-source software. Until Linux catches up with i5/OS in terms of auto-tuning, you have no other choice.
Barry L. Kline is a consultant and has been developing software on various DEC and IBM midrange platforms for over 21 years. Barry discovered Linux back in the days when it was necessary to download diskette images and source code from the Internet. Since then, he has installed Linux on hundreds of machines, where it functions as servers and workstations in iSeries and Windows networks. He co-authored the book Understanding Linux Web Hosting with Don Denoncourt. Barry can be reached at