I just got back from a quick trip to LinuxWorld, where I discovered that the vendors are still *gasp* completely clueless. They’re stuck in this “The Server is Precious” mindset, which prevents them from understanding the real value in Linux. Linux isn’t an operating system, it’s a framework. Datacenter computing isn’t a collection of random boxes, it’s a compute grid. Of course, for this to all make sense, software must be built with the datacenter in mind. Good IT shops understand this, and bad ones go to LinuxWorld and suck up Avocent bullsh!t about remote server management.
“The Server is Precious”:
This old-school mentality basically entails a staff of administrators who lovingly hand-craft every server for its new purpose. The server performs this purpose for some period of time, then fails spectacularly. The admins profusely apologize for this failure and sell their management on a cluster (-f_ck) at five times the price of the original server. The cluster, being less reliable than the original server, proceeds to fail in a more subtle, less recoverable way, taking the service down again. The admins, who probably understand something about grid computing, refuse to bring it up to their management, since they’re afraid of change, and Server TLC is their only modus operandi. So, the company continues to hunt around for a better set of crutches for their poorly designed, highly persistent application. They’ll probably go to Windows Server, sucking up the Microsoft crap about five-nines.
Linux is a Framework:
Okay, so now that it’s clear that you shouldn’t think of servers as these little hand-crafted beasties, how do we move on? The key is to think of Linux as an application framework rather than a monolithic operating system. A monolithic operating system is generally hand-installed, comes with every imaginable feature, and must be upgraded by hand every few years. A framework, however, is simply the collection of open source software that surrounds the business application. This framework is the minimum set of libraries and tools required to operate and maintain the application in production. Each application should have its own framework. Keep it simple.
Datacenter Grid Computing:
Every system in your datacenter should be viewed as a meaningless, stateless blob of compute capacity. Think of it as a big CPU hopper. The Infrastructure team is responsible for throwing the lowest-price, least stinky CPU manure into the top of the hopper. Their only job is to keep costs low, and the hopper full enough to satisfy demand. They need to understand (and be measured by) the cost-per-pound of said CPU, and they need to charge each service for the capacity that they consume. Think of it this way: you need to buy processors and DIMMs. Anything else is overhead. Some of it (like a mainboard) is probably necessary. But why do you need a discrete power supply? Why a discrete chassis? What the hell do you need remote floppy redirect over a java applet for, again?
Make the Software an Appliance:
Ah! Now we’re getting somewhere. Once you have a big hopper of CPU cycles, you can simply instantiate the software on this resource as needed. Need more? Take it from the hopper on a just-in-time basis. Done with your resources? Back in the hopper. Suddenly, things like a complete, mangled (ahem, managed) operating system become superfluous. Since you built a framework of your minimum set of requirements around your application, you can just toss that blob into the compute farm and execute it to your heart’s content.
Rubber, meet Road:
Okay, this is all good to talk about. In fact, lots of people are talking this way. So, who gets it… and who is completely missing the boat?
On the Boat:
- Amazon Web Services
- RedHat’s Xen Platform
- Rackable Systems
- Dell Cloud Computing
- Silicon Mechanics
Still on the Dock:
Am I crazy? Post a comment, let me know. Who else is still on the dock? Who is the captain of the boat?
I’ve been running m0n0wall on my Soekris 4801 for a while now. I’ve decided that I want to get a little more control over my firewall and move back to Linux. So, I installed WhiteBox Linux on the 4801 and built a set of iptables firewall rules.
- Soekris Net 4801 firewall
- 1GB CF Card
- Whitebox Linux 3.0
- A working PXELinux environment
- Null modem cable
- Configure whitebox linux for a PXE based installation
- export the installation tree via NFS or HTTP from a server on your LAN
- copy vmlinuz and initrd.img from the images/pxeboot/ directory to /tftpboot/ on your DHCP/PXE server
- append this entry into your pxelinux config file (the label is what you’ll type at the boot prompt later, and initrd.img is the file you copied into /tftpboot/):
label install-wb3
  kernel vmlinuz
  append initrd=initrd.img root=/dev/ram0 console=ttyS0,9600n8
- Install the 1GB CF card into the Net4801’s CF slot. This will appear as an IDE drive “hdb” to the OS. The RHEL3 install consumes about 570MB, so 1GB is the minimum card size that can be used for this.
- Connect the soekris’ eth0 port to your LAN
- Connect the null modem between the soekris and a PC with a terminal emulator
- Open the terminal emulator and configure it for 9600/N/8/1, no flow control
- Apply power to the Soekris
- When prompted, hit CTRL-P to enter the soekris’ boot menu
- To PXE boot, type boot F0 and hit ENTER
- If your syslinux environment is properly configured, you should see a pxelinux prompt
- At the pxelinux prompt, type install-wb3 to begin the RHEL3 installation
- proceed through the RHEL3 installation…
- you must use GRUB, not LILO
- configure the partitions by hand, with one 1GB / partition
- do not create a swap partition; frequent swapping quickly wears out flash memory
- select a “custom” installation
- unselect all the package groups to get down to a 570MB installation
- when the system reboots, you’ll notice that the grub bootloader is pretty messy on the soekris’ serial console. to fix this:
- edit /boot/grub/grub.conf using your favorite editor
- change the ‘terminal’ line to look like this:
terminal --timeout=10 --dumb serial
- use chkconfig to disable any unnecessary services to conserve memory
- reboot to test your new config
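That service trimming can be scripted. Here’s a sketch; the service names are only examples of things commonly safe to disable on a firewall box, so check the output of chkconfig --list on your own install first:

```shell
# Candidate services to turn off -- examples only; list what's really
# enabled on your box with:  chkconfig --list | grep ':on'
SERVICES="isdn pcmcia kudzu apmd sendmail"

# Print the commands rather than running them, so you can review the
# list before applying it as root.
for svc in $SERVICES; do
    echo "chkconfig --level 2345 $svc off"
done
```

Once you’re happy with the list, run the printed commands (or drop the echo) as root, then reboot to test.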
By default, on a PC, Linux sends all bootup and login text to the VGA monitor that you may or may not have attached. It also expects all input to come from the keyboard. This is fine if you have one linux server, and it’s sitting right in front of you. However, if you need to administrate remotely, VGA is a really tough way to do it.
The solution? Linux can output both kernel and login messages to the serial port on a machine, which can then be connected to either another linux box or a dedicated terminal server.
There are 3 steps to make a Linux server output all data to a serial line. This was tested under redhat-7.2. Small changes may need to be made for other distributions.
- Setup LILO to instruct the kernel to use the serial port.
The kernel needs to be told to send all its bootup output to the serial port. This can be done by adding a single line to the image section of the /etc/lilo.conf file, and running /sbin/lilo to implement the change:
append="console=ttyS0,9600n8"
This tells the kernel to send the console to ttyS0 (the first serial port) at 9600 baud, no parity, 8 data bits. Set your terminal to those settings too.
You can also put the LILO prompt itself on the serial port if you’d like, by adding the following line to the header section of the /etc/lilo.conf file:
serial=0,9600n8
- Give yourself a login prompt
Once the kernel has booted, though, you need to tell the init process to spawn a login shell on the first serial port. I added the following line to the /etc/inittab file:
S0:2345:respawn:/sbin/agetty -L ttyS0 9600 vt100
This tells init to spawn the agetty process. We tell agetty to listen on ttyS0, the first serial port, at 9600 baud, and to assume a terminal type of vt100. The -L flag tells agetty that this is a direct line, not a modem.
- Allow root logins on the serial port
Okay, so now (after a reboot, or after running /sbin/telinit q to make init re-read inittab) we have the kernel messages and a login prompt on the serial port. However, it’s not letting us log in as root. We need to tell login to allow root logins on the serial port. Add the line ttyS0 to the file /etc/securetty to tell login that ttyS0 is a secure login facility, and to allow root on that line.
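The /etc/securetty change is a one-liner, but it’s worth guarding against adding the entry twice. A sketch, shown here against a scratch copy of the file; point the variable at /etc/securetty on the real box:

```shell
# Scratch copy for illustration; use SECURETTY=/etc/securetty for real.
SECURETTY=./securetty.test
touch "$SECURETTY"

# Append ttyS0 only if it isn't already listed, so the file stays clean
# no matter how many times this runs.
grep -qx 'ttyS0' "$SECURETTY" || echo 'ttyS0' >> "$SECURETTY"

# Running it a second time is a no-op thanks to the grep guard.
grep -qx 'ttyS0' "$SECURETTY" || echo 'ttyS0' >> "$SECURETTY"
```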
You can now use a serial crossover cable to connect another linux box to COM1 on your server. Using minicom (or the terminal app of your choice) set to 9600 baud, no parity, 8 data bits, you’ll be able to watch your server boot up and login as root. This is great for remote administration and debugging, such as working on network problems that have prevented you from logging in normally.
Using linux as a home firewall must be the most common use of linux in the home, and is one of the ways that many people get started with linux. A linux home firewall will run on just about any old PC hardware, so long as you can install 2 network cards in it. Firewalling for the average home user requires very little processor power — a 486 will work just fine, although administration might be a bit sluggish.
All you need for a home firewall is a simple linux installation (I use redhat) to start off with. You’ll need to configure both of your network cards: eth1 with the IP address that your provider assigned to you, and eth0 as 192.168.0.1/24. The firewall consists of 2 simple pieces: keeping people out and allowing your connections through.
Step 1: Keeping people out using state tracking
State tracking allows you to only allow valid connections, identified by the correct packets originating from the correct places. Any incoming packet that is not associated with a connection that you originated will be dropped.
The first thing to setup is a new “chain” that we will use for both INPUT and FORWARD categories of packets. This can be done with the following commands. The first command sets up a new chain called “block”. The second command allows any state-tracked packet that is for an established connection to flow through. The third command matches anything else and drops the packet.
/sbin/iptables -N block
/sbin/iptables -A block -m state --state ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A block -j DROP
We then want to add rules for our external interfaces to jump to this block chain. We’ll assume that eth1 is our external interface, and eth0 is our internal (trusted) interface. We’ll also add a rule for the loopback interface, since many applications that you may have on your server will need that.
/sbin/iptables -A INPUT -i eth0 -j ACCEPT
/sbin/iptables -A INPUT -i lo -j ACCEPT
/sbin/iptables -A INPUT -j block
/sbin/iptables -A FORWARD -i eth0 -j ACCEPT
/sbin/iptables -A FORWARD -i lo -j ACCEPT
/sbin/iptables -A FORWARD -j block
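Note that these commands append rules, so running them a second time stacks duplicates. If you keep them in a script, flush first. Here’s a sketch (saved as, say, print-rules.sh) that just prints the rules above in order, so you can review the output and then pipe it to sh as root to apply:

```shell
#!/bin/sh
# Emits the firewall rules from this article in order. Review, then:
#   sh print-rules.sh | sh      (as root)
emit() { echo "/sbin/iptables $*"; }

# Start clean so applying the script twice doesn't stack duplicates.
emit -F
emit -X

# The "block" chain: pass established/related traffic, drop the rest.
emit -N block
emit -A block -m state --state ESTABLISHED,RELATED -j ACCEPT
emit -A block -j DROP

# Trust the internal interface (eth0) and loopback; everything else
# on INPUT and FORWARD goes through the block chain.
for chain in INPUT FORWARD; do
    emit -A "$chain" -i eth0 -j ACCEPT
    emit -A "$chain" -i lo -j ACCEPT
    emit -A "$chain" -j block
done
```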
At this point, you should be able to get out to the internet from the linux box that you set up, but you’re not yet translating traffic from your home network to use this new internet connection. This brings us to step 2.
Step 2: Network Address Translation
We’re going to use “fake” 192.168.0.* addresses on the internal network, which will allow up to 253 workstations behind your firewall. If you’re running more than that, this range can be increased, but, well, if you have more than 253 workstations behind your home firewall, you’re doing something pretty special. We’re going to use the real IP address of 18.104.22.168 as the IP address of your linux box; substitute your own IP address in there.
We’re going to use a technology called “SNAT”, which stands for Source Network Address Translation. This basically means that the firewall is going to translate the fake addresses of workstations behind the firewall to its own address, which is a valid internet IP address. This is really just a simple command:
/sbin/iptables -A POSTROUTING -t nat -s 192.168.0.0/24 -j SNAT --to-source 18.104.22.168
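One step the rules above don’t cover: the kernel ships with IP forwarding switched off, and without it the firewall won’t route packets between eth0 and eth1 at all. On redhat you can make it permanent in /etc/sysctl.conf (run sysctl -p, or echo 1 into /proc/sys/net/ipv4/ip_forward, to apply it without a reboot):

```
# /etc/sysctl.conf -- enable IPv4 forwarding so the firewall will route
net.ipv4.ip_forward = 1
```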
At this point, we’re ready to configure a workstation. For small networks, just configure your workstations by hand. If you have more than a few workstations, then we can use DHCP to automagically assign IP addresses to them. That’ll be covered in a separate article.
For now, setup a workstation with the IP address of 192.168.0.10, a subnet mask of 255.255.255.0, and a default gateway of 192.168.0.1. Use whatever DNS servers were assigned to you by your ISP. Once that machine is configured, you should be able to browse the internet from that machine.
We can then check our config with the command:
/sbin/iptables -L -n -v
Which will give a verbose description of the rules that we have running. If all is working, RedHat gives us an easy way to save our active config:
/sbin/service iptables save
So that our rules will still be around if we have to reboot our firewall for some reason.