Sunday, July 20, 2008

Inside AppOS: Creating a Hardware Profile

Unlike other Linux-based solutions, AppOS ships for specific hardware profiles rather than a generic hardware architecture. In keeping with the "only what is absolutely needed" approach of AppOS, each hardware profile supports a number of different "feature sets". So how do you figure out which profile you need? Figuring that out, or creating your own profile, is pretty easy. If your hardware is not supported, we provide a kernel build kit; I will post more about that later in the week.

AppOS can upgrade any Linux distribution by adding a kernel and an entry to your boot loader (GRUB, LILO, etc.). On an existing Linux system, finding out what hardware you have is pretty simple. We recommend the open source project lshw.

[root@foo]# wget

Always check first and make sure that the release hasn't been upgraded, especially if wget can't get the file. Next, simply untar it with tar zxvf lshw-B.02.13.tar.gz; cd lshw-B.02.13. If you don't already have gcc-c++ installed, on Fedora-based systems you will need to run yum install gcc-c++.

Simply run make, then cd src. In there you will find an lshw executable (assuming your environment is OK). There are pre-built binaries available if you need them. Using lshw to build a profile is pretty simple:

[root@foo]# lshw > profile.appos
[root@foo]# cat profile.appos | grep driver

This will produce a list of the drivers lshw found loaded. You can typically ignore the sound drivers, unless you have some specific reason to keep them active on your server. In our case, it found several; you don't need to worry if you see a driver listed more than once.
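One quick way to sanitize that list, assuming lshw's usual driver= notation in the configuration lines, is to pull out just the driver names and de-duplicate them. The sample lines below stand in for a real profile.appos so the sketch is self-contained:

```shell
# Collapse lshw's grep output down to a unique list of driver names.
# These sample lines are illustrative; on a real system you would run
# the same filter over profile.appos instead.
sample='configuration: driver=e100 ip=10.0.0.2
configuration: driver=ata_piix
configuration: driver=ata_piix
configuration: driver=uhci_hcd'
printf '%s\n' "$sample" | grep -o 'driver=[a-z0-9_]*' | sort -u
```

On a live system that would be: grep -o 'driver=[a-z0-9_]*' profile.appos | sort -u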


So after sanitizing the output from lshw, we have the valuable information we need. You can check the list of hardware profiles when downloading AppOS; typically you will find what you need. In the event you need to compile, we provide a downloadable kernel kit, where all you have to do is run make menuconfig; make.

The above hardware is a typical Intel-based system: Intel PCIe and AGP chipset support, the PIIX ATA driver, the Intel E100 network driver, the Intel i801 SMBus, and USB UHCI.

The only other piece of information you need is how many CPU cores you have; a quick command:

[root@foo]# cat /proc/cpuinfo | grep model | grep name

model name : Intel(R) Pentium(R) D CPU 2.80GHz
model name : Intel(R) Pentium(R) D CPU 2.80GHz

Here we can see it's got multiple processors or multiple cores; either way, we don't care, we just know it needs SMP support.
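Counting the matching lines gives you the number directly; anything greater than 1 means the profile needs SMP support. The sample lines below mirror the output above so the sketch is self-contained:

```shell
# grep -c counts matching lines instead of printing them.
# Two "model name" lines, as in the output above, means SMP.
cpuinfo_sample='model name : Intel(R) Pentium(R) D CPU 2.80GHz
model name : Intel(R) Pentium(R) D CPU 2.80GHz'
printf '%s\n' "$cpuinfo_sample" | grep -c 'model name'
```

On a live system that would be: grep -c 'model name' /proc/cpuinfo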

After building the new kernel, or selecting one for download, it is placed in /boot and /boot/grub/grub.conf is updated (or /etc/lilo.conf). For LILO you'll need to run lilo to install the change; otherwise, GRUB will pick up the changes automatically. There is no need for an append line or an initrd line: the AppOS kernel build has a built-in compressed ramdisk image. You can use the append line to configure the system on first boot; more on that later.
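As a sketch, a grub.conf entry for an AppOS kernel might look like the following; the title, partition, and kernel filename here are illustrative, not actual AppOS names. Note the absence of both the initrd line and any boot arguments:

```
title AppOS
        root (hd0,0)
        kernel /boot/appos-kernel
```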

Common Sense: Disabling Linux Kernel Modules

Linux kernel modules are great for development and workstation environments, but do they actually make sense for servers or appliances? The quick answer to that is not really. When you factor in that having loadable kernel module support provides a potential attack vector into the heart of your system, you quickly begin to realize that the risk far outweighs the benefits.

Aside from the development advantages of loadable kernel modules, the only other key advantage is possibly saving space. Kernel modules indeed save space when they are not loaded. However, I can't come up with a single module I'd want on a server that I would ever have unloaded, and you don't really need the development advantages on a production server.

The security risk, though, is considerably higher when you run with kernel module support enabled. If someone compromises your system and gains local root access, all they need to do is insmod something malicious into your kernel, and then you might not even know it's been compromised.

Loadable kernel modules do provide a generic way for Linux distributions to ship a one-size-fits-most solution. Most competent admins will end up recompiling the stock kernel anyway. So why run something heavily loaded, when all you really need is a minimal set of features? The more features you add to a system, the greater the number of possible attack vectors and the more vulnerable code there is.
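Disabling loadable module support is a single switch when you recompile. In a 2.6-era make menuconfig it lives under "Loadable module support", and with it switched off the resulting .config fragment looks like this:

```
#
# Loadable module support
#
# CONFIG_MODULES is not set
```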

Have some common sense, disable your loadable kernel module support, and optimize your Linux kernel!

Saturday, March 1, 2008

AppStacks - one stack, many possibilities

This weekend we wrapped up testing of our "Appliance Stacks" under AppOS 4.0, and started beta testing those same stacks under MacOS X 10.5. Appliance Stacks, or "AppStacks" as we call them, are self-contained images that each provide a secure and optimized stack. AppStacks can be run from within any existing Linux operating system, but require AppOS for some of the enhanced security features.

Well, we have just completed our QA process for AppOS running within existing virtualized environments, such as VMware ESX and Parallels. This will allow ISVs to develop for a single platform (AppOS) and still support legacy Linux platforms. While for maximum security we recommend running AppOS natively on a server, Spliced Networks is about providing choice to the community.

With AppOS there is no steep learning curve, no need to learn a completely new packaging system, and the solution makes it simple to QA the resulting product. You just have to ./configure and go! Something practically *EVERY* open source developer out there knows how to do.

rPath's calculator shows benefits do not scale

Earlier this week rPath announced a "cost savings benefit" calculator, so I thought I would take a look. After plugging in some generic values for costs, I looked at exactly what savings you can expect. If you currently support just one operating system, such as Red Hat Enterprise Linux, there are no R&D savings and no additional revenue gains at all. According to rPath's own calculator, there are *NO* R&D benefits from just one OS. I found this interesting, because rPath have on many occasions indicated how much of a time-savings benefit it is to use Conary. Now their calculator looks like it's backtracking on that?

Their calculator shows a static 40% cost-savings benefit on support. What's interesting is that, according to their calculator, the benefits of rPath do not scale beyond 8 supported operating systems. So if you need to QA, let's say, 10 operating systems, there are no additional cost savings.

This calculator is very questionable: it produces some nice numbers, but there is no explanation of the savings. Apparently, if you use rPath, their calculator claims a 15% or 16% increase in revenue. Perhaps it prints money? It's unrealistic, and doesn't appear to take into account the pricing program that rPath pushes on its customers.

It doesn't seem to take into account that real ISVs have to support legacy customers, so at any point in time, you might be supporting RHEL 4.x and 5.x, Fedora Core 6, 7 and 8, CentOS 4.x and 5.x, SuSE Enterprise, OpenSolaris, Solaris, Ubuntu Server, Gentoo and Debian.

Thursday, February 28, 2008

Virtualization == Security FUD starts to unravel

If you have ever had the opportunity to listen to VMware's marketing folks, you'll have heard the crazy FUD that virtualization by itself offers you a degree of added security. This is complete nonsense: a guest VM is just as vulnerable as a system not running in a virtual machine. You still have to secure it, and virtualization really only offers some kernel-level separation between applications. If you are looking for application-partitioning-style security, you can get it with AppOS without incurring the overhead of virtualization.

Today, though, the risks of having all your eggs in one virtualized basket are starting to be seen. The folks at Core Security issued this advisory, along with C code for an exploit, showing how to access the host system from within a guest VM! As virtualization starts to get scrutinized more, I wonder how long it will be before VMware's virtual switch technology in ESX starts to show signs of vulnerabilities too. As a virtual layer 2 switch, it is likely subject to the same security problems physical layer 2 switches are.

Tuesday, February 26, 2008

Windows Server 2008 Core == Lame!

With Microsoft Windows Server 2008 actually coming out this week, I thought I would take a quick look at their offering. I had heard about the GUI-less Windows Server 2008 and thought maybe Microsoft had finally got their act together. Could Microsoft finally have some real competition for Linux on the server side?

Well, the short answer is no. Microsoft Windows Server 2008 actually still has a GUI; in fact, it's not just a GUI, but something based on Windows Vista. Short of being seriously drunk or seriously stupid, putting anything based on Vista on a server is a flat-out bad idea. Microsoft are rolling out Windows Server 2008 in the usual multiple flavors - Enterprise, Datacenter, and so on. The only version that offers the "GUI-less" mode is Windows Server 2008 Core.

So when folks start saying Windows Server 2008 Core is competition for Linux, you can now officially just laugh! I was expecting something maybe interesting, like 64-bit DOS with advanced networking and filesystem capabilities. What do we get? We get the GUI, but instead of the Explorer shell with the task bar, start menu, and the rest, your default shell is the command prompt. Yes folks, you read that right. All Microsoft has really done is strip out the GUI tools and other things like .NET from the release, change the default shell, and add some command-line utilities for you to get the job done.

Microsoft have made it so confusing that even their own pundits and experts are having a hard time doing basic configuration tasks such as setting up the hostname - click here to see an example on YouTube.

So if you need any of the key functionality in Windows Server 2008, such as .NET, you basically can't use Core. Core is a very lame attempt at claiming to have a CLI. Sure, they have a CLI, but this would be like me starting X and loading xterm as the window manager: I'd still be using tons of resources for the GUI.

So Windows Server 2008 still has the GUI; sure, there is a "GUI-Lite" version with limited functionality, but this is no match for Linux. Windows Server 2008 looks like yet another flop from Microsoft. Microsoft shouldn't worry about Open Source; it looks like they are taking themselves out between this and their efforts with Windows Vista!

Tuesday, February 12, 2008

AppOS not vulnerable to local root exploit

This week started off with this local root exploit in Linux. Today we saw some patches from rPath, whose Linux distribution was vulnerable, like any other Linux system running 2.6.17 and later. Those customers have been vulnerable to this attack, which could potentially be deployed remotely through an insecure service running on the system; there are many ways this could easily be turned into a remote attack, even something as simple as weak passwords on a customer account. Running unpatched might be okay for your box at home or that server in the lab with no Internet access, but requiring an upgrade and a reboot, with the resulting downtime, to fix this is a serious matter for a business.

While AppOS was running the vulnerable kernel, the exploit could not be used against it, thanks to the security mechanisms built into AppOS. Maybe I should refer to them as the severely paranoid security mechanisms. In fact, there was no way for a remote user to even execute the exploit, even if they had access to a local user's account, as it could not be written to the system providing the services, thanks to the unique approach to chroot jails that AppOS uses. Our customers enjoyed the comfort of our zero-day attack protection. The kernel still has exploitable code, which is fixed with an AppOS update image; however, the severity is low, not critical as it is with our competitors' solutions. Our customers can upgrade during their maintenance window, at their leisure.

A better solution..

Sales and marketing people will sell you anything that moves: if you're paying, they're selling. They don't care if it's the right solution; they don't even care if it does what you think it should do. They just want your money and the sale. Companies don't keep using their products because they're the best products; they keep using them because they spent too much money and don't want to admit to their boss that it was a bad decision. I'm not too fond of technology marketing people!

Spliced Networks is a company built by engineers. Our mission statement is simple and accurate - "Build innovative and secure solutions for the Enterprise Network.." - in other words, a better solution. We won't sell you anything unless we believe it is the most innovative and most secure solution you can buy today. If it's missing something you need, we'll create it, and on many occasions build you something even better.

Spliced Networks is dedicated to building faster, more secure, and more innovative server and network appliance solutions. You won't find us mucking about with X Windows, KDE, or GNOME. The fact that we don't care about X Windows or need to support it enables us to offer far superior security; other vendors have to sweep those issues under the rug.

AppOS 4.0.0 is nearing FCS; when it's released, servers will never be the same again!

SquashFS with LZMA integrated into AppOS 4.0

LZMA is one of the best compression algorithms out there, and SquashFS, as we've known for years, is one of the best compressed filesystems you can get for Linux, on top of the security side benefits we make use of in AppOS. We have been looking at SquashFS with LZMA and have decided to integrate it into AppOS 4.0. SquashFS with LZMA offers about 20MB/sec transfer rates on decompression, so there is no performance impact from using it in AppOS, and it is looking to offer about a 10% improvement over regular gzip-based SquashFS.

You can get a copy of SquashFS with LZMA from here.

Tuesday, February 5, 2008

Spliced Networks adds 100MBit/sec in Chicago

We are very pleased to announce that we have added 100MBit/sec of bandwidth and servers in Chicago. We expect the new addition to go into production by the weekend. This move wraps up Phase II of our network expansion. Chicago is a key location, as it fills a void: prior to this, the Midwest was served by either Houston, Atlanta, or Philadelphia. The bandwidth to our headquarters in Athens also terminates in Chicago, so this move enables us to provide fast access to additional services and equipment for our partners and customers.