Posts by The Geek

A Tale of Heat and Peace

I am a geek. Let’s just get that out of the way right now. I am a very big geek.

I run servers for a living, but mainly I don’t get to play with them. I get to configure them and fix them when they break, but generally… I just get them to work and make sure they keep working. I never get to play.

That is where my home server closet comes in. My home server closet is my playground. I get to play with things, test out new software, basically break things and have fun in the process. My server closet at home helps me do that.

It really is a closet: a small walk-in closet, four feet by three feet, with a baker’s rack full of computer equipment, monitors, keyboards, cables, and the like. There are servers in there too. Three of them, actually.

Server 1 is my media server (Pragmatic). This server handles streaming duties: video from an onboard TV tuner (MythTV) as well as some streaming MP3s (Icecast).

Server 2 is my development server and general all-around do-boy (Tragic). This server handles Samba (Windows-style file sharing), tinydns (DNS for the internal network), dnscache (DNS resolution for clients on the local network), Apache 2, PHP4, PHP5, SVN, etc…

Server 3 is my phone system server (Magic). This server handles only one thing: my phone lines. It serves as a media gateway allowing me to connect POTS (Plain Old Telephone Service) lines to VoIP services like Vonage (which I don’t use, but there are others out there like them, just not as popular). The software that handles this is called Asterisk, and by all accounts it is a pretty amazing piece of open source software from a company called Digium. The software itself is free, but Digium makes its money off the hardware, and considering how good the software is… I am inclined to throw money their way as often as I can.

Anyways…

Server 3… “Magic”… has been the bane of my existence since I put it into service about a year and a half ago. It isn’t so much that the hardware is bad, but rather the hardware in relation to the environment it is in. Magic has had an IBM Ultrastar 9GB drive since the day I built it. 10K RPM of SCSI lovin’. It is fast… really fast… and pretty reliable… only one or two problems. It is HOT and LOUD. Really loud.

So enters the paradox that is the server closet. The server is loud, and as such I don’t want the noise in my home office, so I close the door. Enter the other problem: heat. With the closet door closed, temperatures inside climb to insane levels. So then I have to open the door to let the heat out. Lather, rinse, repeat.

So finally today… the heat was unbearable… and was starting to affect the PBX cards in the machine. So I decided to find out why. Turns out the PSU fan had died, and the computer was just roasting inside. So I pulled it out to replace the PSU and decided that I wanted to remove the noisy SCSI drive while I was at it. So I started looking around for my Ghost disks… Long story short… no matter what I tried, Ghost refused to see the SCSI drive.

Enter my savior: I have had a copy of the Ultimate Boot CD for a couple of years now. So after blowing a couple of hours with Ghost, I figured, what the heck. I threw it in, did a little g4l (Ghost 4 Linux) diskcopy sc0 wd0 mojo, and it did its thing. I rebooted fully expecting it not to work… but much to my surprise, it worked flawlessly. Just a couple of adjustments to the GRUB boot loader to make it boot from the IDE drive instead of the SCSI drive and I was in business.
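If you don’t have g4l handy, the same whole-disk clone can be done with plain dd from just about any Linux boot CD. A minimal sketch, assuming the SCSI source shows up as /dev/sda and the IDE target as /dev/hda (those device names are assumptions; check yours first):

# Clone the SCSI disk onto the IDE disk, block for block.
# Both disks must be unmounted; the target is overwritten entirely.
dd if=/dev/sda of=/dev/hda bs=64k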

So now… I am basking in the quiet and reduced heat levels in my server closet. Life is good, and geek points restored.

Mod_Rewrite: A Deeper Look

I have used mod_rewrite extensively in the past with great success. It is great for handling search-engine-friendly URLs. This article won’t be covering how to do that sort of thing, as it has been beaten to death; rather, this article will show you some of the quirks of using mod_rewrite with Apache and Apache’s different configuration options.

I have used mod_rewrite a great deal and I consider myself well versed in how to set it up and use it. That being said, today I learned a couple of little quirks about using it in the Apache config file that might surprise you.

Prior to today, I had always used mod_rewrite in a .htaccess file. I feel safe in saying that 100% of my usage of mod_rewrite to date has been via .htaccess. So today, when mod_rewrite wasn’t working, I tried my normal methods of debugging and was stumped as to why it wasn’t working.

First off, a little background: we were running mod_rewrite from within the httpd.conf file. Furthermore, we were actually calling RewriteEngine On from inside an Apache VirtualHost directive. In the past people have told me this is a faster way of using mod_rewrite, so we decided to use it, as this was a somewhat speed-sensitive server. What I discovered is that running mod_rewrite within this directive changes the way the mod_rewrite directives work.

Previously, when using mod_rewrite via .htaccess, I was always able to turn on logging using these directives at the “root” level of the httpd.conf file:

RewriteLog "/var/log/apache/modrewrite.log"
RewriteLogLevel 10

Lesson 1 learned: this works great if you are running mod_rewrite from a .htaccess file, but it has zero effect if you are running mod_rewrite from inside a VirtualHost directive. For logging to work there, you must place the RewriteLog directives inside the same VirtualHost directive where you call RewriteEngine On. Good to know…
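Here is a minimal sketch of what that looks like, with example.com and its paths standing in for the real site:

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example

    RewriteEngine On
    # The logging directives must live in the same VirtualHost
    # as the RewriteEngine On they are supposed to report on.
    RewriteLog "/var/log/apache/modrewrite.log"
    RewriteLogLevel 10
</VirtualHost>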

Once I turned on the logging I was able to determine that my regex pattern matching for mod_rewrite was missing my target.

Lesson 2 learned: when you are running mod_rewrite in a VirtualHost you cannot use RewriteBase. This means you must correct for it manually in your regex, matching the leading / (and any trailing /) yourself.
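A quick sketch of the difference, using a made-up foobar rule (the rule and handler.php are illustrative, not from the actual site):

# In .htaccess (per-directory context) the directory prefix is
# stripped before matching, so the pattern has no leading slash:
#   RewriteRule ^foobar/(.*)$ /handler.php?path=$1
# Inside a VirtualHost the pattern sees the full URL path,
# so the leading slash must be matched explicitly:
RewriteRule ^/foobar/(.*)$ /handler.php?path=$1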

Hopefully those lessons will help out others who aren’t getting the results they expect.

And now for those of you that have made it this far down, here are some fun things you can do with mod_rewrite.

Have all files that don’t have an extension passed to the PHP parsing engine. This is particularly useful for creating scripts that look like directories. So you have http://somedomain.com/foobar/action/1/name/smith/ and the first part of the URL (foobar) is actually a PHP script that the rest of the URL gets passed to. Sneaky, huh? Here it is:


<FilesMatch "^[^.]+$">
    ForceType application/x-httpd-php
</FilesMatch>
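With that in place, the trailing portion of the URL (/action/1/name/smith/ in the example above) shows up in the script as PATH_INFO, which is where you pull your parameters from.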

I hope this posting helps others in using the powerful Apache module mod_rewrite, and maybe prevents some lost hairs in the process.

How Is It Possible?

How is it possible that people who are running an online company don’t have a clue when it comes to technology?

Case in point today: I was handed a project to take care of for a client. They want me to take input from a form, validate it, store it in a local database, and then create a CSV file and send it via FTP to a remote server. Not a difficult request, but not a terribly secure one either. So I decided I would call the company that was supposed to be getting the CSV via FTP and see if I could just POST the data to a web form instead. So I called them…

Me: “Yeah, hi. This is so and so from company and I was wondering if you guys had a web form that I could send this CSV/FTP data to instead of the CSV/FTP method.”

Other Guy: [:long pause:] “Uhm… I have no idea what you are talking about. The CTO is out of town and won’t be back until tomorrow. Let me ask [so and so].”

Me [now with so and so on the phone listening]: “Yeah I would like to just send this via curl data post.”

At this point I could tell that the loud sucking sound coming from the other end of the line wasn’t a good sound. So I tried to explain myself further.

Me: “You know… An online form… Where you type stuff into it, and it saves it some place… like a database… and then you guys can create your CSV from that instead. It would save me a lot of time.”

Other Guy again: “Well this plan has been laid out for two weeks and it is supposed to go live tomorrow. Don’t you think it is a little late in the game to change things?”

At this point I was annoyed… Here I am, two hours into this project, and he is talking down to me like I had been there throughout the entire thing?!? WTF?

Me: “You know you are certainly right _IF_ I had been involved at that point. And I wasn’t. So… getting back to my question, can you do it?”

Other Guy: “Well we would have to get the CTO involved.”

You have got to be f*cking kidding me. You need the CTO to tell you how to make a web form? And you people run a web based business?!?!

At this point I decided to cut my losses.

Me: “Okay I will create the CSV and FTP the file. I will need the FTP information as well as the CSV format.”

Other Guy: “Well we have already sent that to Bob.”

Me: [leaning over to Bob] “Did you get that CSV formatting and FTP info from CompanyB?”

Bob: “Nope they never sent it.”

Me: “He says you never sent it. So please send it so that I can continue my work.”

That was 3 hours ago. I am still waiting for the CSV file.

It’s no wonder the dotcom bubble burst. If any of those companies were run this way, it’s a miracle they survived.

Busy as a One Legged Man in an Ass Kicking Contest


I admit that I have been remiss about updating my blog. I have been doing what little I can to help with the hurricane relief, providing support to some companies that desperately need it.

In the meantime, here is an image that I created in Photoshop for an upcoming project. Obviously the MSN “blue guy” is my influence as well as some of the images from foood.net.

If there is some interest in how to create something like this, please let me know and I will work up a tutorial for doing it.

FreeBSD Device Polling

I previously wrote a “How-To” article detailing how to set up VLAN tagging (802.1q) using FreeBSD and a Cisco 2924-XL-EN. You can read the article for more information, but basically VLAN tagging is a way to isolate traffic to particular ports on a switch. In addition, that How-To also covers how to use the FreeBSD machine to provide rate limiting and firewall protection for those VLANs.

Anyways… in that article I touch on the fact that we must use device polling to avoid the kernel context switching back and forth between userland applications and kernel processes. To truly understand the benefits of device polling you should read the device polling web page, which explains the reasons behind it better than I could ever hope to.

I never got a chance to go back and write up how to set up device polling like I wanted to… until now.

First, some quick and pretty graphs (trimmed systat -vmstat output) from a P4 server pushing ~50Mbps (the core router for a data center) without device polling enabled:


1 users    Load  0.00 0.00 0.00    Aug 29 22:54

 1.9%Sys  21.5%Intr   0.0%User   0.0%Nice  76.6%Idl
|    |    |    |    |    |    |    |    |    |
=+++++++++++

Interrupts:
16713 total
      em1 irq5
      em2 irq12
 7133 em3 irq10
    1 ata0 irq14
 1355 mux irq11
 1000 clk irq0
  128 rtc irq8
    3 mux irq5
 7093 mux irq12

Note the long line of “+” signs. Those are IRQ interrupts that the CPU has to handle. At the time this screen scrape was taken, over 20% of the CPU was being spent handling IRQ requests, the bulk of them from device em3 (7133 interrupts in the interrupt column).

Now the same exact server with the same exact data flow, this time with device polling enabled:


1 users    Load  0.00 0.00 0.00    Aug 29 22:55

 3.1%Sys   0.8%Intr   0.0%User   0.0%Nice  96.2%Idl
|    |    |    |    |    |    |    |    |    |
==

Interrupts:
 1124 total
      em1 irq5
      em2 irq12
      em3 irq10
      ata0 irq14
      mux irq11
  996 clk irq0
  128 rtc irq8
      mux irq5
      mux irq12

Now note the difference. The CPU is now free to handle other things instead of IRQ requests. That is the power of device polling.

Here is how you set it up.

First you have to make sure that the network interface you are using supports device polling. The author’s web site linked above lists support for “fxp” (Intel 10/100 cards), “sis” (SiS-based network cards – not sure which ones), and “dc” (DEC/Intel 21143-based cards). One card which is not listed on that page, but which I know supports device polling, is the “em” card (Intel Gigabit cards). If there are other cards that support device polling, please let me know. I have only used fxp and em cards, so I know for a fact that those work.

So now that we know we have a network interface that supports polling (it is really the device driver that supports polling), we can get started on adding support to the kernel. This is really simple and consists of two directives in the kernel config file:

options DEVICE_POLLING

options HZ=1000

Once you have those in your kernel config, you can recompile your kernel and boot into it. If you need help with that part, consult the FreeBSD Handbook’s section on building a custom kernel (and you are running a custom kernel with FreeBSD, right?).
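For a 5.x-era source tree, the short version of the rebuild looks like this (MYKERNEL standing in for whatever you named your config file):

# from /usr/src, with your config in sys/i386/conf/MYKERNEL
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now    # reboot into the new kernel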

Now you have a computer with a network card that supports device polling, a kernel that is device-polling enabled, and a more responsive real-time clock (that is the HZ=1000 line). With the new kernel booted, we are ready to rock and roll. Now we just need to issue:

sysctl kern.polling.enable=1

Now polling is enabled, and you are all set to go forth into that dark night and let your devices be polled.
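To have it come back on after a reboot, the same knob can go into /etc/sysctl.conf, which the rc scripts apply at boot:

# /etc/sysctl.conf
kern.polling.enable=1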

I hope that sheds some light on a subject that can be of great use for high-network-traffic servers (routers, firewalls, proxies, etc).

Let me know if you found it useful.

Keeping Your Server Up To Date Easily

Many distros have easy ways to keep the OS up to date with security fixes and patches. Probably the easiest to use of this group is apt-get. Apt originally started off with the Debian distribution, where it is responsible not only for updating software, but also for installing and removing software easily.

Additionally… someone has ported apt to the Fedora/Red Hat distribution, where (in my opinion) it blows away any other implementation (yum, up2date, etc).

I install apt on all my Fedora machines and use it to find, install, and keep up to date the server’s installed software.

Probably the biggest problem with Fedora these days is how quickly each release is phased out. What do you do if you have Fedora Core 1 or Core 2 installed on your server, and the Fedora project has (as of this posting) moved on to newer releases? Enter apt and the Fedora Legacy group. Using Fedora Legacy you can keep your OS up to date with recent patches using their apt repository. You can read more about their repositories here.

You can set up apt under Fedora, point it at Fedora Legacy’s apt repository, and keep your server up to date quickly and easily.
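The repository goes into apt’s sources list. As a sketch only (the URL, release number, and components below are illustrative assumptions; take the real values from Fedora Legacy’s own instructions):

# /etc/apt/sources.list (apt-rpm format:
# rpm <base URL> <release>/<arch> <components>)
rpm http://download.fedoralegacy.org/apt fedora/1/i386 os updates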

Here are some handy commands to run with apt:

apt-get update

This command updates apt’s package lists from its configured sources, so that apt knows the correct versions of packages to download and install. As a general rule you should run this at least once before using apt.

apt-get install {name of package}

This command will go out to the repository and fetch the latest version of the package you specify, e.g. apt-get install httpd will install the latest version of Apache if it is not already installed.

apt-get remove {name of package}

This command will uninstall the software package you specify, e.g. apt-get remove httpd would uninstall the httpd package if it was installed.

apt-cache search {name of package}

This command is extremely handy if you can’t find the name of the package you are looking for. Sometimes, especially with lib packages, the names can vary greatly. apt-cache search php would show you a listing of all the packages that contain php in their name or description.

apt-get upgrade

This command is the best command of all. It looks at the list of packages installed on your server, checks the repository to determine which of those packages have been updated, and then downloads and installs the updates. Very handy. I run this once a week per server.
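If you trust the repository enough for unattended updates, that weekly run is easy to hand off to cron. A sketch for root’s crontab (the -y flag answers yes for you, so use it with care):

# every Sunday at 4:20 AM: refresh the package lists, then upgrade
20 4 * * 0 /usr/bin/apt-get update && /usr/bin/apt-get -y upgrade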

Some packages are excluded from the upgrade. Among them is the kernel. On some servers you don’t want to upgrade the kernel if you don’t have to, but sometimes you need to because of local-user kernel exploits. To get a list of available kernels to install you can run:

apt-get install kernel

This will output a list of kernels that are available and you can select a name from the list to force it to be installed.

I should note that installing a kernel can be risky business, especially when you are not local to the server. Having said that, I have been very lucky with installing kernels from apt-get.

I hope this information shows you that maintaining a server and its software isn’t as hard as you think it is. It just takes a little setup and the right tools. Using apt-get and Fedora Legacy’s apt repositories, you can keep your server safe and secure.

Tar Over SSH

Recently I did an entry on SCP (Secure CoPy), which uses SSH to copy a single file over a secure tunnel to a remote server, or to copy a remote file to a local directory. This works great for a single file, but what if you want to do an entire directory?

Well, one way is to tar up the directory, copy the file to the remote server (using scp perhaps?), log in to the remote server via SSH (you aren’t using telnet any more, right?), and then untar the file on the remote side. Pretty simple, but since we are geeks, we try to do things as efficiently as possible (even if there are better solutions).

Enter tar over SSH.

Tar has the great ability to send data to stdout/stdin using the “-” (dash) as a filename in the command line. So using that we can string together pipes to send the data to a remote server. Let’s explore how:

tar -zcf - ./ | ssh remoteuser@remotehost tar -C /path/to/remote/dir -zxf -

What this does is pretty simple: it creates a compressed tar stream of the current directory (./) and sends it to stdout (-). We catch stdout with the pipe (|) and call ssh to connect to a remote server, where we execute tar as the remote command. The remote tar changes directory to /path/to/remote/dir and then runs a decompressing extraction from stdin (-).

About the only caveat of this method is that tar must be in the remote user’s path; otherwise you would have to specify the fully qualified path to the tar binary.

It is a great way to transfer a bunch of files securely, as well as maintain ownership and permissions.
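One variation to get you started: run the pipeline in reverse to pull a remote directory down to the local machine:

ssh remoteuser@remotehost tar -C /path/to/remote/dir -zcf - . | tar -zxf -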

Try and come up with some other uses, and post them in the comments.

Web Development Tools

Today’s blog post is going to be a little bit off topic for me… Still something that I deal with… but not 100% related to hosting in general…

Web development tools.

I do development in PHP. It helps pay the bills, and keeps me busy, plus in some weird sort of way it gives me a creative outlet.

Recently I had to build a rather large intranet/internet application for a gymnastics company (you know the kind, where you take your kids there and they teach them how to tumble and not break their necks while doing so). Anyways… one of the design criteria imposed was to make sure that the site was standards compliant, with the client specifically requiring XHTML compliance.

So I set up the site with templates (since I am a PHP programmer and know it well, I used Smarty). Making a site that is standards compliant as well as CSS-driven is not easy, as I quickly found out. CSS coding is an art form, because not only do you have to apply the CSS to the site and make sure it works as you intended, but you also have to check it in other browsers to make sure it renders properly there as well (a particular area where IE fails miserably).

Enter one of the most useful extensions for the Firefox browser to date: Web Developer. To say that Web Developer, or webdev, is good would be a textbook definition of understatement. Flat out… IT ROCKS.

Probably the feature I use the most is “View Style Information”, which lets you select any element on a rendered page in your browser (a regular web page) and see the CSS behind it. This is such a great feature that it is the primary reason I started using webdev in the first place. About 8 months ago I had a page that I couldn’t figure out how to get to render the way I wanted. I must have played with the style sheet for 3 hours before a friend recommended this Firefox extension. Within 10 minutes I had figured out the stylesheet quirk and fixed it. Amazing.

But it has other features too… It can outline block-level elements (divs, paragraphs, etc.), tables, and table cells. It has built-in menu items for CSS and HTML validation (using the W3C validator websites). It can display passwords hidden in password fields (sure, you can view the source, but this is so much easier). You can automatically populate form fields for testing. You can even clear HTTP authentication and session cookies… It is like the Swiss Army knife of web development (complete with the little toothpick).

Another great extension that I use for web development is ColorZilla. It also sits in the status bar of Firefox, and it has an eye dropper that you can use to sample any color on the page. It doesn’t matter if it is a picture, a CSS-styled element, text, whatever… You can get the hex code for the color as well as the RGB values. I think I use it almost as much as the webdev extension.

So there you go… two of the most useful Firefox extensions for web developers. Load them into Firefox today and start basking in the glow.

How To SSH Without Passwords

One of the greatest features of SSH is the ability to use key based authentication. Key based authentication uses the public/private key method to allow logins to SSH without the use of passwords.

This method is better than password-based authentication because a password can be brute forced. While it is theoretically possible to brute force a public/private key pair, the amount of computing power required is pretty staggering, as is the amount of time involved using today’s standard computers. (Quantum computing is said to be able to accomplish the same feat in seconds… but as of yet no quantum computer exists outside the lab that is large enough to do it… so we are safe… for now.)

It works pretty simply. You create a private and public key pair for your login. You can do this by running:

ssh-keygen -t dsa

This will generate a public and private key using the DSA method (you can also use RSA).

You will be prompted for where to store the key pair, as well as for a passphrase. The passphrase protects the private key from unauthorized usage, but it also negates the advantage of using keys to authenticate automatically without passwords (there is a way around this using ssh-agent, but that is outside the scope of this discussion and will be another blog entry). I would recommend skipping the passphrase for the time being.

Once you have created your public and private key pair (generally in the .ssh directory in your home directory), you will need to copy the public key to each server you wish to authenticate with without typing a password. You can use the scp method (covered in a previous post) to copy the public key file (id_dsa.pub by default) into the .ssh directory of your home folder on the remote server, under the username you log in with (so for user xyz it would be /home/xyz/.ssh). After you have copied that file to the server, you will need to append it to the file authorized_keys. You can do that by running:

cat /home/xyz/.ssh/id_dsa.pub >> /home/xyz/.ssh/authorized_keys
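Two things worth noting here: id_dsa.pub is simply the default filename ssh-keygen picks for DSA keys, and sshd will silently ignore your key if the permissions on it are too loose. Tightening them on the remote side is cheap insurance:

chmod 700 /home/xyz/.ssh
chmod 600 /home/xyz/.ssh/authorized_keys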

Now to test it: type ssh xyz@remoteserver and you should be logged in right away with no password prompt. If you didn’t do it right, you will be prompted for a password.

Now I have to mention… because we aren’t using a passphrase on the private key, if somebody were to get hold of it, they could log in to every server that uses that public/private key combination. So keep that in mind.

I use key-based authentication for most of my servers, because I have gotten so tired of password-based brute force attacks on my SSH daemon. I keep my private keys on a keyfob on my keychain, so if I have my car keys… I have access to my servers. I also keep the private keys in another safe place… and if I told you where that was… I would have to kill you… So let’s leave it at that.

How To Torment Script Kiddies

Recently, while working on a server, I noticed that there were some unusual files in the /tmp directory. This always sends up red flags in my head, so I investigated more closely and determined that somebody had placed a file on the server via a phpBB exploit and was using it as a means of building a zombie network.

Typically this type of activity is closely linked to “script kiddies” and not legitimate hackers. Script kiddies are people with no more hacking ability than anybody else; they simply know how to read about holes in certain software and use other, legitimate hackers’ work to exploit those holes, to some unknown end.

This case was no different. Here are some fun things you can do to mess with them. Many times script kiddies will leave their software behind on the server, and that in itself is a goldmine of information.

Case in point: I found a binary on the server that was connected to an IRC server. I did a quick review of the process list (ps -auxwww) and determined the process ID of the running application. Then I ran:

strace -s 16000 -p {process id}

This command, strace, traces a process’s system calls; it will attach to an already running application so you can see what it is doing. This is particularly useful if the binary in question is no longer on the hard drive.

So I attached to the process and was able to determine that it was connected to an IP address on port 6667. Port 6667 is one of the default ports for IRC. So I fired up my trusty IRC client and connected to it. Sure enough I was connected to a server with 106 clients and 3 operators.
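Another way to confirm where a suspect process is connected, assuming lsof is installed on the box:

# list the process’s open network sockets, with numeric
# ports and no DNS lookups
lsof -Pan -p {process id} -i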

So now I had to figure out what channel to join. Lucky for me, the binary was still on the hard drive. Here is where the second part comes in: because script kiddies only use the software and don’t know how it actually works, we can often pull more information out of the binary itself. So to glean information from the binary I ran:

strings {filename}

This printed out a long list of text strings embedded in the binary. The best part: the IRC channel that the zombie was supposed to join on the IRC server, and the password for that channel, were right there in front of me. So… I joined the channel.

I must have scared the living hell out of the operator of the zombie network, because as soon as I started talking to him, I was firewalled off of the server completely. All new connections coming into the server were blocked as well. Luckily, I was able to collect a list of IP addresses of the zombies in the channel before I was disconnected, and I have been systematically notifying the operators of those machines that they have been compromised.

I am sure somewhere in the world… a script kiddie had to go clean out his pants after that episode. I hope I put a dent in his “zombie network”.

Armed with some of this information I hope that you will be able to torment script kiddies when you encounter them. Not all of them are this easy… But when they make it easy… I say take the time to mess with them… If nothing else it may scare them just enough to stop for a little bit.