Archive for the ‘File Systems’ Category

Rebuilding Zeus – Part I.5: Change of heart, change of hardware.

October 14th, 2009

After a bit of digging around, I've decided my originally spec'd hardware is too much for a boxen that will be on 24×7, especially with electricity rates going up next year – every little watt counts. The existing 65W CPU isn't ideal, so I'm opting for a 45W CPU instead, and looking at the lineup, that means a walk down AMD way. Fewer watts, less heat and less noise, noice! See AMD's product roadmap for 2010–2011.

The original specifications I mentioned were:

I’ve decided to change the CPU and motherboard but keep the other bits and bobs – I could lose the graphics card and go onboard, but I felt like leaving it there for now. The target budget is $250 maximum for CPU + mobo combined, so I’m sticking with DDR2, which implies AM2+, but it must also satisfy:

  • CPU has to be 45W and at least 1.6GHz, dual core (no more), and has to support virtualisation.
  • Motherboard has to support 8GB (most boards do!), have at least 2x PCIe and a PCI slot; it would be nice if the network cards work (gigabit), but no fuss if they don’t. No crazy shebangabang WiFi, remotes and other bling, and if it has onboard video, great – otherwise it’s OK to use a crappy card.
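As a sanity check once the parts arrive, the virtualisation requirement is easy to verify from Linux itself – AMD CPUs advertise the `svm` flag and Intel CPUs the `vmx` flag in `/proc/cpuinfo`. A minimal sketch:

```shell
# Check for hardware virtualisation support on Linux:
# AMD exposes the 'svm' flag, Intel exposes 'vmx'.
if grep -qE '\b(svm|vmx)\b' /proc/cpuinfo; then
    echo "hardware virtualisation: supported"
else
    echo "hardware virtualisation: not found (check the BIOS too)"
fi
```

Note the flag can also be hidden by a BIOS setting, so a missing flag doesn’t always mean the silicon lacks it.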

I picked the AMD Athlon X2 5050e CPU because it was cheap (~$80), has a 45W TDP, supports virtualisation and is an AM2 part. Next was the motherboard, with the ASUS, Gigabyte & XFX models as my targets.

Chipset-wise, only a few fit the criteria for a possible match, because the others just don’t have enough SATA ports onboard. AMD boards primarily use chipsets supplied by NVIDIA or AMD themselves.

Initially I looked at the ASUS boards (they’ve been nothing but rock solid for me in the past), but after a lot of research scouring the manufacturer sites I ended up picking the Gigabyte GA-MA790X-UD4P, which is based on the AMD 790X chipset. The board came with 8x SATA ports, 3x PCIe, 2x PCI and a Gigabit NIC, all for $137 from PCCaseGear. Not only was the power consumption lower, but the noise and heat generated were substantially lower too!

Coming in close were the ASUS M4N78 PRO and the ASUS M4A78 PRO; unfortunately, each had fewer SATA ports (2 less) and fewer PCIe slots (1 less).


Part I: Rebuilding ZEUS, the journey of training the next home server

October 6th, 2009

I’ve been looking at upgrading our existing home server from the archaic (and unsupported!) Ubuntu Gutsy (because I was feeling gutsy at the time) to something newer, fresher, and that will last me at least another 2 years. This is purely for my documentation.

Current Setup

Currently running an AMD setup with Ubuntu Gutsy (7.10) – I didn’t think it would last this long, honest! Ubuntu 6.06 had too many hardware/driver incompatibilities.


On an ASUS A8N-SLI Deluxe motherboard (because you know, servers need SLI!) sporting an AMD Athlon64 3200+ (the only AMD CPU at home!) with 2GB of RAM (hey, DDR1 wasn’t cheap enough!)


00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev a3)
00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev f3)
00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)
00:02.0 USB Controller: nVidia Corporation CK804 USB Controller (rev a2)
00:02.1 USB Controller: nVidia Corporation CK804 USB Controller (rev a3)
00:04.0 Multimedia audio controller: nVidia Corporation CK804 AC'97 Audio Controller (rev a2)
00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:09.0 PCI bridge: nVidia Corporation CK804 PCI Bridge (rev f2)
00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev f3)
00:0b.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0c.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0d.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0e.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:00.0 VGA compatible controller: nVidia Corporation G70 [GeForce 7300 GT] (rev a1)
05:06.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:07.0 RAID bus controller: Silicon Image, Inc. Adaptec AAR-1210SA SATA HostRAID Controller (rev 02)
05:0a.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:0b.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
05:0c.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13)


processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 47
model name      : AMD Athlon(tm) 64 Processor 3200+
stepping        : 2
cpu MHz         : 1000.000
cache size      : 512 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow up pni lahf_lm ts fid vid ttp tm stc
bogomips        : 2011.59
clflush size    : 64

This faithful boxen has been our primary fileserver (XFS+LVM, 3TB) – used internally in our house and also by others who upload their stuff to be backed up. Subversion repositories, Apache/Lighttpd test servers for PHP work, virtualisation for Windows 2003, 2000 and SQL Servers running for testing, and several other things (think: TeamCity, continuous integration tools, Confluence etc). It’s also been damn convenient when you’re at work or on holidays to be able to log in, muse about via SSH and even fix things remotely.

Needs & Wants

The new server will need to fulfil the following roles:

  • Function as a NAS to continue to offer backup (via users home directories) and storage options
    • No file-system constraints aside from no Ext3 or ReiserFS.
  • Offer the ability to still run virtual machines – I need to virtualise CentOS, Ubuntu and Windows for testing, and they’ll be running in bridged mode
  • No real need for a GUI (I can consider myself a little more l33t than a few years ago)
  • Run a Subversion repository (not that hard!)

The idea is to have a bare bones operating system install and have the virtual machines handle the hard and ugly work – webservers to test things, servers to try development deployments (java) and other bits and pieces. The core OS just has to manage the NAS and allow the ability to SSH in to offer subversion access.


The hardware I’ve picked from things I had around the place, the only thing I’ve bought is just new sticks of RAM.

  • Motherboard: ASUS P5QL-PRO
    This board offered some excellent specifications via the P43 chipset. The things I looked for were the number of SATA ports out of the box (6 native SATA2), the number of 1x PCIe slots (2!) for future additions of PCIe SATA adapters, and the maximum amount of memory possible (8GB). Oh, and of course, something cheap that can run the CPU I had around. A Gigabit NIC was also important (dual would be better!), but if it wasn’t supported I had a trusty Intel PRO/1000 MT Server PCI card to fill the void – almost everything supports them (e1000)!
  • CPU: Intel Core-2 E6750 – 2.66Ghz (65W TDP, VT)
    The important bits were Intel VT support, a low TDP and a dual core that’s not clocked too high.
  • RAM: Corsair TWIN2X4096-6400C5 (4Gb kit x 2 = 8Gb)
    Cheapy cheapy, twice the fun of a regular kit, slightly higher CAS – but who CAreS, this isn’t being overclocked.
  • Graphics: ASUS 9400GT PCI-Express
    The cheapest graphics card to be found at the legendary & award-winning computer store MSY Technologies. Depending on how the drivers go (I’m usually biased towards ATI for all Linuxes) I might end up paying for an ATI card later.
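On the NIC question above, once the board is in hand it’s a quick check from Linux whether the onboard port was recognised – every interface the kernel has a driver for shows up under /sys/class/net (generic sysfs behaviour, nothing board-specific):

```shell
# List the network interfaces the kernel recognised; 'lo' (loopback)
# is always present, so if little else appears the NIC wasn't picked up
# and it's time to reach for the Intel PRO/1000 fallback.
ls /sys/class/net
```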

Next up: the investigation. Be warned though, I started this back in June/July (possibly a bit earlier).


What’s coming in FreeBSD 8.0?

September 13th, 2009

I’ve been looking at FreeBSD to run at home; up until now it’s been mostly virtualised installs (and OpenBSD powers our gateway – 385 days of uptime!). I came across an article describing the changes coming in FreeBSD 8.0 in a nice summary format (similarly, there’s one for FreeBSD 7.0 too!).

Well worth the read if you’re interested in the world of FreeBSD.


Redhat 5.4 released, CentOS 5.4 is coming soon!

September 3rd, 2009

If you haven’t heard already, Red Hat has released the eagerly anticipated 5.4 release of Red Hat Enterprise Linux at their Red Hat Summit in Chicago. As expected, Red Hat looks to have moved from Xen as their favoured virtualisation hypervisor to KVM (which is an integral part of the Linux kernel). All this will eventually go into RHEV.

All the changes in this release are documented in the Release Notes; unfortunately, Ext4 is still not considered usable in this release (they’re targeting RHEL6, possibly).

So what of the RHEL clone, CentOS? Possibly a 2-4 week delay, it seems. WOO! In the meantime, upgrading from 5.3 is easy peasy.


Mounting and activating LVM Volumes from BootCD to recover data in linux

September 2nd, 2009

I’ve been working heavily with Red Hat Enterprise Linux (and subsequently CentOS) the past few months (shh! don’t tell my MSFT homey!), and one of the great things about CentOS and RHEL is that they both install using LVM – which is a helluva lot easier when time passes and you realise you’re running out of space on a drive.

But today I had to recover some data from an LVM partition and copy some bits over to another partition without actually booting the CentOS install (it was bj0rked by yours truly!). What to do? Throw in a Ubuntu LiveCD (or another) and just mount the partitions 🙂

First thing we need to do is install LVM – remember, we need root (sudo) for these to work.

$ aptitude install lvm2

Then scan for any available physical volumes on any of the drives.

$ pvscan

Scan for any Volume Groups that may be present.

$ vgscan

Now activate any of the Volume Groups that it finds, running this makes the logical volumes known to the kernel.

$ vgchange --available y

Then let it scan for any Logical Volumes on any drives

$ lvscan

After running the logical volume scan it will show the path to the LVM mount point; for my boxen it gives something like this:

ACTIVE            '/dev/LVM/Data' [5.26 TB] inherit

Simply mount the path specified and browse as normal 🙂

$ mount /dev/LVM/Data /mnt
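For next time, the whole dance can be wrapped up in one small script. This is just a sketch of the steps above – the `/dev/LVM/Data` path and `/mnt` mount point are from my box, so substitute your own. It defaults to a dry run that only prints the plan (handy for double-checking before touching anything); set `DRYRUN=0` to actually execute:

```shell
#!/bin/sh
# Sketch of the LVM recovery steps above (run as root from a live CD).
# Defaults to a dry run that just prints the plan; set DRYRUN=0 to execute.
set -e
DRYRUN="${DRYRUN:-1}"
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run pvscan                    # find physical volumes
run vgscan                    # find volume groups
run vgchange --available y    # activate them (LVs become known to the kernel)
run lvscan                    # list logical volumes; note the ACTIVE path
run mount /dev/LVM/Data /mnt  # the path lvscan reported on my box
```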



Linux Btrfs: A short history of btrfs

August 2nd, 2009

Valerie Aurora (such a cool name!) takes a look at the history of Btrfs – well written and easy to follow.


Linus releases Linux 2.6.30

June 11th, 2009

Linus has released kernel 2.6.30; the list of changes is available in the Linux Kernel Newbies guide.

This version adds the log-structured NILFS2 filesystem, a filesystem for object-based storage devices, a caching layer for local caching of NFS data, the RDS protocol which delivers high-performance reliable connections between the servers of a cluster, a distributed networking filesystem (POHMELFS), automatic flushing of files on renames/truncates in ext3, ext4 and btrfs, preliminary support for the 802.11w drafts, support for the Microblaze architecture, the Tomoyo security module, DRM support for the Radeon R6xx/R7xx graphic cards, asynchronous scanning of devices and partitions for faster bootup, MD support for switching between raid5/6 modes, the preadv/pwritev syscalls, several new drivers and many other small improvements.

One interesting change (amongst the many) is a new feature called Fastboot. Essentially, when we boot right now, significant cycles are wasted waiting for device probing to complete. From Jonathan Corbet’s article on LWN:

There are many aspects to the job of making a system boot quickly. Some of the lowest-hanging fruit can be found in the area of device probing. Figuring out what hardware exists on the system tends to be a slow task at best; if it involves physical actions (such as spinning up a disk) it gets even worse. Kernel developers have long understood that they could gain a lot of time if this device probing could, at least, be done in a parallel manner: while the kernel is waiting for one device to respond, it can be talking to another. Attempts at parallelizing this work over the years have foundered, though. Problems with device ordering, concurrent access, and more have adversely affected system stability, with the inevitable result that the parallel code is taken back out. So early system initialization remains almost entirely sequential.

This new release attempts to address this problem.

Arjan hopes to succeed where others have failed by (1) taking a carefully-controlled approach to parallelization which doesn’t try to parallelize everything at once, and (2) an API which attempts to hide the effects of parallelization (other than improved speed) from the rest of the system. For (1), Arjan has limited himself to making parts of the SCSI and libata subsystems asynchronous, without addressing much of the rest of the system. The API work ensures that device registration happens in the same order as it would in a strictly sequential system. That eliminates the irritating problems which result when one’s hardware changes names from one boot to the next.

How well it does, I guess we’ll have to wait and see. But here’s a bit of a tidbit in the kernel from the new Microblaze implementation.

void __init setup_cpuinfo(void)
{
        struct device_node *cpu = NULL;

        cpu = (struct device_node *) of_find_node_by_type(NULL, "cpu");
        if (!cpu)
                printk(KERN_ERR "You don't have cpu!!!\n");

        printk(KERN_INFO "%s: initialising\n", __func__);

DUDE, you don’t have cpu!!!


THIS IS FEDORA: Fedora 11 Released

June 9th, 2009

This is FEDORA.

Fedora 11, aka Leonidas, has been released. Whilst the front page is yet to be updated, the mirrors are being updated as I write and ISOs are being propagated.

Download ISO:

In Australia? Try the local mirrors:

Bit of a torrenter? See the Torrent Tracker page.

Approximate sizes (from internode):

Fedora-11-i686-Live.iso             688M
Fedora-11-i686-Live-KDE.iso         686M
Fedora-11-x86_64-Live.iso           691M
Fedora-11-x86_64-Live-KDE.iso       693M

See the Fedora 11 Release Notes for more information about changes in this release, the Fedora 11 feature list or the Unofficial Fedora 11 Guide.

I’ve been awaiting this release primarily for the Linux kernel v2.6.29 (in comparison to Jaunty‘s kernel 2.6.28), which brings a slew of updates to the table – in particular KMS (kernel mode setting – flicker-free graphics), the inclusion of Btrfs in the kernel for preliminary testing and better memory management. Of course, Fedora 11 ships with 1.6 as well. With the inclusion of GCC 4.4, all packages are now compiled with gcc 4.4 too.

I’ve only dabbled in Fedora 10, but I think it’s a worthy move from my primarily Ubuntu lifestyle.

What’s really interesting, though, is that Ubuntu 9.10 seems to have a decent performance bump – so whilst the wait for Fedora 11 is over, it’s time to get excited about the snappier Karmic Koala.


Rebuilding Zeus: Part 1 – Preliminary Research and Installing Ubuntu 9.04 RC1

April 19th, 2009

Just spent a fair chunk of today getting a rebuild of Zeus going – our affectionately dubbed Ubuntu server at home. This is the third rebuild (hardware-wise) in the past 5 years (sheesh, it’s been that long?), but I’m not complaining. The first Ubuntu’fied version (5.10 – Breezy Badger) ran on a Pentium 4 3GHz (Socket 478), a noisy little guy that sucked quite a bit of power – my old development box that served me well.

Then with the release of the fornicating Feisty Fawn (Ubuntu 7.04) I moved the server over to an AMD box – an AMD 3200+ on an ASUS A8N-SLI Deluxe (which featured the incredibly shaky nForce 4 SLI chipset) with a modest 2GB of DDR RAM.

NVIDIA nForce4 APIC Woes

Unfortunately I didn’t realise that by using the nForce 4 chipset under Linux I’d have to wrestle with APIC issues, due to bugs in the chipset and kernel regressions.

If you fall into the above hole, edit your grub boot menu:

$ sudo vi /boot/grub/menu.lst

And change your booting kernel with two new options:

title           Ubuntu 7.10, kernel 2.6.22-14-generic
root            (hd0,5)
kernel          /vmlinuz-2.6.22-14-generic root=UUID=c7a7bf0a-714a-482e-9a07-d3ed40f519f5 ro quiet splash noapic nolapic
initrd          /initrd.img-2.6.22-14-generic

You may want to add that to the recovery kernel too, just in case. This will effectively disable the onboard APIC controller, as it’s quite buggy. More information is available on Launchpad.
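A quick way to confirm the options actually took effect after a reboot is to read back the kernel’s command line – this is generic Linux, nothing nForce-specific:

```shell
# The booted kernel's parameters live in /proc/cmdline;
# after the grub change you should see noapic and nolapic listed.
cat /proc/cmdline
```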

It’s been chugging along nicely for the past 2 years – the time is always inaccurate (about 8 minutes ahead) but the uptime right now is:

thushan@ZEUS:~$ uptime
19:54:06 up 147 days,  7:27,  7 users,  load average: 0.22, 0.43, 0.32

So I figured it’s time to put these issues behind me and redo the server infrastructure at home.


There are some goals in this rebuild.

  • Try out Ext4 and remove the use of ReiserFS and JFS, which don’t seem to be going anywhere (JFS here and here). ZFS would be nice to try out (but no FUSE!), but I’m hoping Btrfs brings some niceties to the table.
  • The new Zeus needs to look at virtualisation a little more. Right now, a lot of the QA for Windows builds of our stuff is done on several machines all over the place. Consolidate them onto one server with VT support and plenty of RAM, and use a hypervisor (mentioned later) to manage testing.
  • Provide the same services as the existing Zeus:
    • SVN + Trac
    • Apache
    • MySQL / Postgres
    • File hosting – storing, vaulting and sharing content across the computers around (the whole house is gigabitted).
    • Fast enough to run dedicated servers for Unreal Tournament, Quake, Call of Duty 4 and a few other games.
    • Profiles, user data needs to be migrated
  • Messing about with the Cloud-Computing functionality in Jaunty.
  • Provide a backend for the Mythbuntu frontends.
  • Last another 2 years


My previous workstation motherboard was the awesome ASUS P5W-DH Deluxe with an Intel QX6850 CPU, powered by the Intel 975 chipset, which lasted a lot longer than anyone predicted. But earlier this year I had a problem with the board that warranted an RMA request. As I had to have a machine, I ended up buying an ASUS P5Q-Pro and did a re-install (same CPU). So instead of selling off the P5WDH I’ve decided to use that board coupled with an Intel E6750, which was picked because it supports Intel VT and it was lying around. Otherwise I _wouldn’t_ consider using this setup – overkill!!! But I do want this setup to last and be beefy enough to support a little more than a few VMs running concurrently.

Pretty shots are available here. Otherwise: the test bench, the Tuniq and a pretty shot of my setup at home (no, it’s not clean).


Clearly Ubuntu 9.04 is where it’s at: it’s sleeker, blindingly fast to boot thanks to the boot-time optimisations, and a sexier desktop thanks to the visual tweaking and the new GNOME 2.26 inclusion. The installer has matured greatly – gone is the plain old boring partition editor, replaced by one based on GParted, plus a sleek new timezone picker. To make the most of the RAM in the box, the 64-bit edition of ubuntu-desktop is what I’m installing.

Installing Ubuntu? Use UNetbootin!

So you grabbed the latest ISO, burnt it, chucked it into an optical drive and away you go, aye… *IF THIS WAS 2005*!!! As mentioned in an earlier post, grab a copy of UNetbootin, select the ISO you mustered from your local free ISP mirror and throw it onto your USB thumb drive. These days USB drives are dirt cheap – I picked up a Corsair Voyager 8GB (non-GT) for AUD$39.

Why would you want to do that? You won’t need CD-RWs, you can just delete the image and put on another ISO, and what’s more, it will install in no time. With the Voyager I got the core OS installed in 5 minutes – after selecting the iiNet local software sources mirror. Funky?


I got into the virtualisation game early – VMware 2.0 (2000-2001) is where it all began, after seeing a close friend use it. Unfortunately I had to almost give up a kidney to afford to buy it. Then for a brief time I moved to Connectix VirtualPC when VMware 4.0 arrived and messed up my networking stack, but went back to VMware 3.0 for a little while. Eventually I moved back to VirtualPC 2004 after Microsoft acquired Connectix (it was free with the MSDN subby), and back again to VMware with version 5.

Fast forward to 2009 and we have some uber quality hypervisors. VMware still has the behemoth market share, but a little birdie got some extra power from the Sun and has impressed everyone lately with its well-roasted features. The critical decision was which hypervisor to use: VMware Server (1.0, or 2.0 with its web interface – errr!), XenServer (which is now owned by Citrix) or VirtualBox.

After playing around with VMware Server 1.0 last year I was left wanting more, so naturally I moved to VMware Server 2.0 – not knowing that the familiar client interface is GAWN, and in its place is a web-based implementation, VI Web Access. It was slow and clunky and took a while to get used to. The fact that it showed status via the web was funky, but running an entire VM session via a browser plugin (which hosed itself every so often) was far from impressive 🙁

It finally boiled down to deciding between VMware Server 1.0 (released mid-2006), leaning on XenServer (which seems to involve a bit of a learning curve), or moving to brighter pastures with Sun VirtualBox – which is what I use on my development boxes. I’m still playing around with all three to see how they fare. I am a little biased towards VirtualBox (I reckon it’s awesome, ja!) but as this is a long-term build I can’t knock VMware Server out just yet, nor rule out going full para-virtualisation with XenServer – which is probably what I’ll end up doing.

I’ve only got a few days before the final release of Ubuntu 9.04 arrives and all this research prior is to make sure things go smoothly next weekend.


Funky Jaunty: Ubuntu 9.04 Release Candidate, it’s almost here!

April 17th, 2009

Excited, Jen? You should be. Ubuntu 9.04 (aka Jaunty Jackalope) will be out in less than a week, and if you can’t wait, grab the release candidate and give it a go.

Amongst the highlights:

Our main server box (Zeus) is still running 7.04, so I think it’s about time I upgraded the little guy to the latest and greatest – 2 years later, with a fresh dose of hardware.
