m0n0wall Firewall

I have to admit that I don’t really like GUI interfaces. I use them, but I often find that I move more efficiently in a text-based environment. I am so bad that I often find :wq in various text files that I edit in Windows (:wq is vi shorthand for “write, then quit”).

When it came time to find a firewall application, I was not keen on some of the more “flashy” alternatives out there. I had my heart set on a Cisco PIX and was planning on using that until I discovered m0n0wall.

m0n0wall had all the features I was looking for: a stateful firewall, logging, the ability to run from an optimized compact flash device, support for VLANs (802.1q), support for device polling, and, last but not least, it ran FreeBSD.

I have to say that my experience with m0n0wall over the past month has really proven to me that there is some really great software out there. It just works. It is flexible enough that it does exactly what I need, efficient enough that I don’t dread making changes to it, and, out of all the things I have used for firewalls, it is easy to learn and not difficult to master.

The best part is that it does so much more than what I am using it for. Not only does m0n0wall excel as a border firewall/router, but you can use it at home as well. It fully supports NAT, wireless network cards, VPN endpoints, etc… Couple this software with some Soekris hardware and you could easily duplicate the functionality of a Linksys box. Granted, this option might cost you a little bit more money, but it also comes with added flexibility in how you can configure the device, not to mention more headroom for CPU-intensive operations like VPNs.

If you are considering a firewall for your network, personal or professional, I highly recommend the m0n0wall application.

Network Booting Your Computer

Look, I know it has been a while since I posted. I am a busy guy, and the blog is one of those “you know I will do that tomorrow” things.

Here is a quick write-up of how I took the PC Chips 871G motherboard and turned 4 of them into network-bootable servers. It is worth it, trust me.

The PC Chips 871G is an SiS chipset-based motherboard with an SiS900-based onboard LAN. Out of the box the motherboard comes with a network boot option, but it is RPL, which is pretty much useless unless you are running NetWare, and I haven’t touched NetWare since 4.0.

The new universally supported standard of network booting is PXE, so we are going to need to get the computer to boot up using PXE. Here is where things get very interesting.

You have to hack the BIOS. It isn’t for the faint of heart; I actually killed two BIOS images before I got this right. As an aside, you can reflash a non-booting BIOS by booting up the same motherboard with the same type of BIOS chip (like I said, I have 4 of them) and then, while the computer is running, pulling the working BIOS out of the socket – CAREFULLY. Then you place the dead BIOS chip into the socket and reflash it using your known-good BIOS image. I got pretty good at this, I am sorry to say.

You will need a program to edit your BIOS images. There are plenty of places on the internet to find the utilities to do this, but it takes some patience to find one that will work for you. My BIOS was an AMI BIOS, so I had to find AMI BIOS tools. Once you have the tools, you can download the latest BIOS from PC Chips, and then grab the latest boot ROM for the LAN card. You can get this from the handy site Rom-O-Matic.

The BIOS image I had to work with was limited in space. The existing RPL part of the BIOS was 16KB in size, but the new Etherboot image was 32KB, so I had to remove some other things from the BIOS. I removed the RPL portion of the image as well as the SATA RAID software and the boot-screen images.

After I flashed the BIOS with this new image, I had the ability to select a new option from the boot menu: “Etherboot SIS900”.

The BIOS boots as normal and you can boot into a PXE boot environment to load the OS of your choice.

A Tale of Heat and Peace

I am a geek. Let’s just get that out of the way right now. I am a very big geek.

I run servers for a living, but mainly I don’t get to play with them. I get to configure them and fix them when they break, but generally… I just get them to work and make sure they keep working. I never get to play.

That is where my home server closet comes in. My home server closet is my playground. I get to play with things, test out new software, and basically break stuff and have fun in the process.

It really is a closet: a small walk-in closet, 4 foot by 3 foot, with a baker’s rack full of computer equipment, monitors, keyboards, cables, and the like. There are servers in there too. Three of them, actually.

Server 1 is my media server (pragmatic). This server handles streaming duties: video from an onboard TV tuner (MythTV) as well as some streaming MP3s (Icecast).

Server 2 is my development server and general all-around do-boy (tragic). This server handles Samba (Windows-based file sharing), tinydns (DNS for the internal network), dnscache (to handle DNS requests from the clients on the local network), Apache 2, PHP4, PHP5, SVN, etc…

Server 3 is my phone system server (magic). This server handles only one thing: my phone lines. It serves as a media gateway allowing me to connect POTS (Plain Old Telephone Service) lines to VoIP services like Vonage (which I don’t use, but there are others out there like them, just not as popular). The software that handles this is called Asterisk, and by all accounts it is a pretty amazing piece of open source software made by a company called Digium. The software itself is free, but Digium makes its money off the hardware, and considering how good the software is… I am inclined to throw money their way as often as I can.

Anyways…

Server 3… “Magic” has been the bane of my existence since I put it into service about a year and a half ago. It isn’t so much that the hardware is bad, but rather the hardware in relation to the environment it is in. Magic has had an IBM Ultrastar 9GB drive since the day I built it. 10K RPM of SCSI lovin’. It is fast… really fast… and pretty reliable… only one or two problems: it is HOT and LOUD. Really loud.

So enters the paradox that is the server closet. The server is loud, so I don’t want the noise in my home office, and I close the door. Enter the other problem: heat. As soon as I close the door to the closet, temperatures inside climb to insane levels. So then I have to open the door to let the heat out. Lather, rinse, repeat.

So finally today… the heat was unbearable… and it was starting to affect the PBX cards in the machine. So I decided to see why. It turns out the PSU fan had died, and the computer was just roasting inside. So I pulled it out to replace the PSU and decided that I also wanted to remove the noisy SCSI drive. So I started looking around for my Ghost disks… Long story short: no matter what I tried, Ghost refused to see the SCSI drive.

Enter my savior: I have had a copy of Ultimate Boot CD for a couple of years now. So after blowing a couple of hours on Ghost, I figured, what the heck. I threw it in, did a little g4l (Ghost 4 Linux) diskcopy sc0 wd0 mojo, and it did its thing. I rebooted, fully expecting it not to work… but much to my surprise, it worked flawlessly. Just a couple of adjustments to the GRUB loader to let it boot from the IDE drive instead of the SCSI drive and I was in business.
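Incidentally, the same block-for-block copy idea can be sketched with plain dd. The demo below clones a scratch file instead of real disks, since with actual device nodes (like the sc0 and wd0 in that g4l command) getting source and destination backwards destroys your data; the file names here are just examples:

```shell
# Create a 64KB scratch "disk" image to stand in for a real device.
dd if=/dev/urandom of=/tmp/source.img bs=1024 count=64 2>/dev/null

# Clone it byte-for-byte, the same idea g4l's diskcopy applies to real disks.
dd if=/tmp/source.img of=/tmp/clone.img bs=65536 2>/dev/null

# cmp is silent when the two are identical.
cmp /tmp/source.img /tmp/clone.img && echo "clone matches"
```

On real hardware you would point if= and of= at the device nodes themselves, and triple-check which is which before pressing enter.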

So now… I am basking in the quiet and reduced heat levels in my server closet. Life is good, and geek points restored.

Mod_Rewrite: A Deeper Look

I have used mod_rewrite to great extent in the past with great success. It is great for handling search-engine-friendly URLs. This article won’t be covering how to do that sort of thing, as that has been beaten to death; rather, it will show you some of the quirks of using mod_rewrite with Apache’s different configuration options.

I have used mod_rewrite a great deal and I consider myself well versed in how to set it up and use it. That being said, I learned a couple of little quirks about using it in the Apache config file, and how these might surprise you.

Prior to today, I had always used mod_rewrite in a .htaccess file. I feel safe in saying that 100% of my usage of mod_rewrite to date has been via .htaccess. So today, when mod_rewrite wasn’t working, I tried my normal methods of debugging and was stumped as to why it wasn’t working.

First off, a little background: we were running mod_rewrite from within the httpd.conf file. Furthermore, we were actually calling RewriteEngine On from inside an Apache VirtualHost directive. In the past, people have told me this is a faster way of using mod_rewrite, so we decided to use it, as this was a somewhat speed-sensitive server. What I discovered is that running mod_rewrite within this directive changes the way the mod_rewrite directives work.

Previously, when using mod_rewrite via .htaccess, I was always able to turn on logging using these directives at the “root” level of the httpd.conf file:

RewriteLog "/var/log/apache/modrewrite.log"
RewriteLogLevel 10

Lesson 1 learned: this works great if you are running mod_rewrite from a .htaccess file; however, it has zero effect if you are running mod_rewrite from inside a VirtualHost directive. For logging to work there, you must place the RewriteLog directives inside the same VirtualHost directive where you call RewriteEngine On. Good to know…
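Here is a minimal sketch of what that ends up looking like; the hostname, paths, and rule are placeholders, not the actual config from this server:

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example

    RewriteEngine On

    # The log directives must live in the same VirtualHost as the rules,
    # or nothing gets logged.
    RewriteLog "/var/log/apache/modrewrite.log"
    RewriteLogLevel 10

    RewriteRule ^/old/(.*)$ /new/$1 [R=301,L]
</VirtualHost>
```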

Once I turned on the logging I was able to determine that my regex pattern matching for mod_rewrite was missing my target.

Lesson 2 learned: when you are running mod_rewrite inside a VirtualHost, you cannot use RewriteBase. This means you must manually correct for it in your regex and match the leading slash yourself.
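To make the difference concrete, here is the same illustrative rule written both ways (placeholder paths, not the rules from the server in question):

```apache
# .htaccess version: RewriteBase handles the prefix, so the pattern
# sees the path without a leading slash.
RewriteBase /
RewriteRule ^foo/(.*)$ /handler.php?q=$1 [L]

# VirtualHost version: no RewriteBase, so the pattern must account
# for the leading slash itself.
RewriteRule ^/foo/(.*)$ /handler.php?q=$1 [L]
```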

Hopefully those will help out others who aren’t getting the results they expect.

And now for those of you that have made it this far down, here are some fun things you can do with mod_rewrite.

Have all files that don’t have an extension passed to the PHP parsing engine. This is particularly useful for creating scripts that look like directories. So you have http://somedomain.com/foobar/action/1/name/smith/ and the first part of the URL (foobar) is actually a PHP script that the rest of the URL gets passed to. Sneaky, huh? Here it is:


<Files foobar>
ForceType application/x-httpd-php
</Files>

I hope this posting helps others in using the powerful Apache module mod_rewrite, and maybe prevents some lost hairs in the process.

How Is It Possible?

How is it possible that people who are running an online company don’t have a clue when it comes to technology?

Case in point today: I was handed a project to take care of for a client. They want me to take input from a form, validate it, store it in a local database, and then create a CSV file and send it via FTP to a remote server. Not a difficult request, but not a terribly secure one either. So I decided I would call the company that was supposed to be getting the CSV via FTP and see if I could just POST the data to a web form instead. So I called them…

Me: “Yeah, hi. This is so and so from company and I was wondering if you guys had a web form that I could send this CSV/FTP data to instead of the CSV/FTP method.”

Other Guy: [:long pause:] “Uhm… I have no idea what you are talking about. The CTO is out of town and won’t be back until tomorrow. Let me ask [so and so].”

Me [now with so and so on the phone listening]: “Yeah I would like to just send this via curl data post.”

At this point I could tell that the loud sucking sound coming from the other end of the line wasn’t a good sound. So I tried to explain myself further.

Me: “You know… An online form… Where you type stuff into it, and it saves it some place… like a database… and then you guys can create your CSV from that instead. It would save me a lot of time.”

Other Guy again: “Well this plan has been laid out for two weeks and it is supposed to go live tomorrow. Don’t you think it is a little late in the game to change things?”

At this point I was annoyed… Here I am, two hours into this project, and he is talking down to me like I had been there throughout the entire thing?!? WTF?

Me: “You know you are certainly right _IF_ I had been involved at that point. And I wasn’t. So… getting back to my question, can you do it?”

Other Guy: “Well we would have to get the CTO involved.”

You have got to be f*cking kidding me. You need the CTO to tell you how to make a web form? And you people run a web based business?!?!

At this point I decided to cut my losses.

Me: “Okay I will create the CSV and FTP the file. I will need the FTP information as well as the CSV format.”

Other Guy: “Well we have already sent that to Bob.”

Me: [leaning over to Bob] “Did you get that CSV formatting and FTP info from CompanyB?”

Bob: “Nope they never sent it.”

Me: “He says you never sent it. So please send it so that I can continue my work.”

That was 3 hours ago. I am still waiting for the CSV file.

It’s no wonder the dotcom bubble burst. If many of the companies were run this way, it’s a miracle any survived.

Busy as a One Legged Man in an Ass Kicking Contest


I admit that I have been remiss about updating my blog. I have been assisting (in any small way that I can) with the hurricane relief by providing support to some companies that desperately need it.

In the meantime, here is an image that I created in Photoshop for an upcoming project. Obviously the MSN “blue guy” was an influence, as were some of the images from foood.net.

If there is some interest in how to create something like this, please let me know and I will work up a tutorial for doing it.

FreeBSD Device Polling

I previously wrote a “How-To” article detailing how to set up VLAN tagging (802.1q) using FreeBSD and a Cisco 2924-XL-EN. You can read the article for more information, but basically VLAN tagging is a way to isolate traffic to particular ports on a switch. That How-To also covers how to use the FreeBSD machine to provide rate limiting and firewall protection for those VLANs.

Anyways… In that article I touched on the fact that we must use device polling to avoid the kernel context-switching back and forth between userland applications and kernel processes. To truly understand the benefits of device polling, you should read the device polling web page, which explains the reasons behind it better than I could ever hope to.

So I never got a chance to go back and write the information on how to set up device polling like I wanted to… Until now.

First, some quick and pretty graphs from a P4 server pushing ~50 Mbps (the core router for a data center) without device polling enabled:


 1 users    Load  0.00  0.00  0.00                 Aug 29 22:54

 1.9%Sys  21.5%Intr  0.0%User  0.0%Nice  76.6%Idl      Interrupts
|    |    |    |    |    |    |    |    |    |         16713 total
=+++++++++++                                            7133 em3 irq10
                                                        7093 mux irq12
                                                        1355 mux irq11
                                                        1000 clk irq0
                                                         128 rtc irq8
                                                           3 mux irq5
                                                           1 ata0 irq14

Note the long line of “+” signs. Those are IRQ interrupts that the CPU has to handle. At the time this screen scrape was taken, over 20% of the CPU was being spent handling IRQ requests, largely from device em3 (right column: 7133 IRQ requests).

Now the same exact server with the same exact data flow:


 1 users    Load  0.00  0.00  0.00                 Aug 29 22:55

 3.1%Sys   0.8%Intr  0.0%User  0.0%Nice  96.2%Idl      Interrupts
|    |    |    |    |    |    |    |    |    |          1124 total
==                                                       996 clk irq0
                                                         128 rtc irq8

Now note the difference. The CPU is now free to handle other things instead of IRQ requests. That is the power of device polling.

Here is how you set it up.

First, you have to make sure that the network interface you are using supports device polling. The device polling page mentioned above lists support for “fxp” (Intel 10/100 cards), “sis” (SiS-based network cards – not sure which), and “dc” (DEC 21143 “tulip”-based cards). One card which is not listed on that page, but which I know supports device polling, is the “em” card (Intel Gigabit cards). If there are other cards that support device polling, please let me know. I have only used fxp and em cards, so I know for a fact that they work.

So now that we know we have a network interface that supports polling (it is really the device driver that supports polling), we can get started on adding the support to the kernel. This is really simple and consists of two directives in the kernel config file:

options DEVICE_POLLING

options HZ=1000

Once you have those in your kernel config, you can recompile your kernel and boot into it. If you need help doing that part, consult the FreeBSD How-To for creating a custom kernel (and you are running a custom kernel with FreeBSD, right?).

Now you have a network card that supports device polling, a kernel with polling enabled, a more responsive real-time clock (the HZ=1000 line), and the new kernel booted. We are ready to rock and roll. Now we just need to issue:

sysctl kern.polling.enable=1

Now we have it enabled, and you are all set to go forth into that dark night and let your devices be polled.
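One detail worth noting: that sysctl only lasts until the next reboot. To my knowledge it can be made permanent in /etc/sysctl.conf, which FreeBSD reads at boot:

```
kern.polling.enable=1
```

The polling code also exposes tuning knobs such as kern.polling.user_frac (roughly, the share of CPU reserved for userland); the defaults have served me fine, so treat any other values as experiments.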

I hope that sheds some light on a subject that can be of great use for high-traffic network servers (routers, firewalls, proxies, etc).

Let me know if you found it useful.

Keeping Your Server Up To Date Easily

Many distros have easy ways to keep the OS up to date with security fixes and patches. Probably the easiest to use of this group is apt-get. Apt-get originally started off with the Debian distribution, where it is responsible not only for updating software, but also for installing and removing software easily.

Additionally… someone has ported apt to the Fedora/Red Hat distributions, where it (in my opinion) blows away any other implementation (yum, up2date, etc).

I install apt on all my Fedora machines and use its features to find, install and keep up to date the server’s installed software.

Probably the biggest problem with Fedora these days is that each release is phased out very quickly. What do you do if you have Fedora Core 1 or Core 2 installed on your server, and the Fedora project has (as of this posting) moved on to newer releases? Enter apt and the Fedora Legacy group. Using Fedora Legacy, you can keep your OS up to date with recent patches using their apt repository. You can read more about their repositories here.

You can set up apt under Fedora, point it at Fedora Legacy’s apt repository, and keep your server up to date quickly and easily.
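Once apt is installed, pointing it at the repository is a single apt-rpm style line in /etc/apt/sources.list. The line below is only a sketch of the format; the mirror URL and component names are placeholders, so copy the exact line from the Fedora Legacy instructions for your release:

```
rpm http://<legacy-mirror>/apt fedora/1/i386 os updates
```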

Here are some handy commands to run with apt:

apt-get update

This command updates apt’s package lists from its sources, ensuring that apt is looking for the correct versions of files to download and install. As a general rule, you should run this before using the other commands.

apt-get install {name of package}

This command will go out to the repository and fetch the latest version of the package you specify. E.g.: apt-get install httpd will install the latest version of Apache if it is not installed yet.

apt-get remove {name of package}

This command will uninstall the software package you specify. E.g.: apt-get remove httpd would uninstall the httpd package if it was installed.

apt-cache search {name of package}

This command is extremely handy if you can’t find the name of the package you are looking for. Sometimes, especially with lib packages, the names can vary greatly. apt-cache search php would show you a listing of all the packages that have php in their name or description.

apt-get upgrade

This command is the best command of all. It looks at the list of packages installed on your server, then checks the repository to determine which of those packages have been updated. It then downloads and installs the updates. Very handy. I run this once a week per server.
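Since it gets run on a schedule anyway, the update/upgrade pair is an easy candidate for cron. An illustrative /etc/crontab entry (adjust the schedule to taste):

```
0 4 * * 0  root  apt-get update && apt-get -y upgrade
```

The -y flag answers yes to the prompts so the job can run unattended.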

Some packages are excluded from the upgrade. Among them is the kernel. On some servers you don’t want to upgrade the kernel if you don’t have to, but sometimes you need to because of local-user kernel exploits. To get a list of available kernels to install, you can run:

apt-get install kernel

This will output a list of kernels that are available and you can select a name from the list to force it to be installed.

I should note that installing a kernel can be risky business, especially when you are not local to the server. Having said that, I have been very lucky with installing kernels from apt-get.

I hope this information shows you that maintaining a server and its software isn’t as hard as you think it is. It just takes a little setup and the right tools. Using apt-get and Fedora Legacy’s apt repositories, you can keep your server safe and secure.

Tar Over SSH

Recently I did an entry on SCP (Secure CoPy), which uses SSH to copy a single file over a secure tunnel to a remote server, or to copy a remote file to a local directory. This works great for a single file, but what if you want to copy an entire directory?

Well, one way is to tar up the directory, copy the tar file to the remote server (using scp, perhaps?), log in to the remote server via SSH (you aren’t using telnet any more, right?), and then untar the file on the remote side. Pretty simple, but since we are geeks, we try to do things as efficiently as possible (even if there are better solutions).

Enter tar over SSH.

Tar has the great ability to write to stdout and read from stdin when you use “-” (a dash) as the filename on the command line. Using that, we can string together pipes to send the data to a remote server. Let’s explore how:

tar -zcf - ./ | ssh remoteuser@remotehost tar -C /path/to/remote/dir -zxf -

What this does is pretty simple: it creates a compressed tar stream of the current directory (./) and sends it to stdout (-). We catch stdout with the pipe (|) and call ssh to connect to a remote server, where we execute tar as the remote command. The remote tar changes directory to /path/to/remote/dir and then runs a decompressing extraction from stdin (-).

About the only caveat of this method is that tar must be in remoteuser’s path; otherwise you would have to specify the fully qualified path to the tar binary.
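As a side note, the same pipe works with the ssh hop taken out, which is a handy way to sanity-check your tar flags before aiming them at a remote host. The directories below are throwaway examples:

```shell
# Set up a scratch source tree and an empty destination directory.
mkdir -p /tmp/tar_src /tmp/tar_dest
echo "hello" > /tmp/tar_src/file.txt

# The same tar-to-tar pipe, minus ssh: pack the source directory to
# stdout, unpack from stdin into the destination.
cd /tmp/tar_src
tar -zcf - ./ | tar -C /tmp/tar_dest -zxf -

# The file arrives with its contents intact.
cat /tmp/tar_dest/file.txt
```

Swap the right-hand side of the pipe for ssh remoteuser@remotehost tar -C /path/to/remote/dir -zxf - and the exact same stream goes over the wire instead.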

It is a great way to transfer a bunch of files securely, while maintaining ownership and permissions.

Try and come up with some other uses, and post them in the comments.

Web Development Tools

Today’s blog post is going to be a little bit off topic for me… Still something that I deal with… but not 100% related to hosting in general…

Web development tools.

I do development in PHP. It helps pay the bills, and keeps me busy, plus in some weird sort of way it gives me a creative outlet.

Recently I had to build a rather large intranet/internet application for a gymnastics company (you know the kind: you take your kids there and they teach them how to tumble and not break their necks while doing so). Anyways… one of the design criteria that was imposed was that the site be standards compliant, with the client specifically requiring XHTML compliance.

So I set up the site with templates (since I am a PHP programmer and know it well, I used Smarty). Making a site that is standards compliant as well as CSS-driven is not easy, as I quickly found out. CSS coding is an art form: not only do you have to apply the CSS to the site and make sure it works as you intended, but you also have to check it in other browsers to make sure it renders properly there as well (a particular area where IE fails miserably).

Enter one of the most useful extensions for the FireFox browser to date: Web Developer. To say that Web Developer, or webdev, is good would be a textbook demonstration of understatement. Flat out… IT ROCKS.

Probably the feature I use the most is “View Style Information”, which allows you to select any element on a rendered page in your browser (a regular web page) and see the CSS rules behind it. This is such a great feature that it is the primary reason I started using webdev in the first place. About 8 months ago I had a page that I couldn’t figure out how to get to render the way I wanted. I must have played with the style sheet for 3 hours before a friend recommended I try this FireFox extension. Within 10 minutes I had figured out the stylesheet quirk and fixed it. Amazing.

But it has other features too… It can outline block-level elements (divs, p, etc.), tables, and table cells. It has built-in menu items for CSS and HTML validation (using the W3C validator websites). It can display passwords hidden in password fields (sure, you can view the source, but this is so much easier). You can automatically populate form fields for testing. You can even clear HTTP-based authentication and session cookies… It is like the Swiss Army knife of web development (complete with the little toothpick).

Another great extension that I use for web development is ColorZilla. It also sits in the status bar of FireFox and has an eye dropper that you can use to sample any color on the page. It doesn’t matter if it is a picture, a CSS style element, text, whatever… You can get the hex code for the color as well as its RGB values. I think I use it almost as much as the webdev extension.

So there you go… Two of the most useful FireFox extensions for web developers. Load them into FireFox today and start basking in the glow.