Posts Tagged ‘zeus rebuild’

Part III: Zeus rebuilt and configured!

November 21st, 2009

I’ve spent the last month working with the newly built zeus server which is now powered by OpenSolaris (2009.06).

Here’s my final hardware specifications:

  • CPU: AMD Athlon X2 5050e – 2.6GHz (45W TDP, AMD-V)
  • Motherboard: Gigabyte GA-MA790X-UD4P (AMD 790X chipset)
  • RAM: 2x Corsair TWIN2X4096-6400C5 (4GB kit x 2 = 8GB)
  • Graphics: ASUS 9400GT PCI-Express
  • Hard Disks:
    • rpool – 2x WD740ADFD – 74GB, 10K RPM, 16MB cache (mirrored)
    • tank – 6x WD1002FBYS – 1TB, 7200RPM, 32MB cache (raidz)
    • base – 2x WD7500AAKS – 750GB, 7200RPM, 16MB cache (mirrored)
  • Add-on cards:
    • SATA – Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller
    • NICs – 2x Intel Corporation 82545GM Gigabit Ethernet Controller (e1000g)

I’ve finally managed to get the GA-MA790X-UD4P on the OpenSolaris HCL list – woo! Unfortunately the onboard NIC will not work in the 2009.06 release even though it is detected:

Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller

Maybe in a future release. Make sure you update the BIOS, as OpenSolaris may otherwise complain about the USB controller being ‘mis-configured’.

Just for kicks I went to Jaycar and bought myself a power usage meter to measure the watts used by the new boxen (see a review of the Mains Power Meter on DansData).

Old Zeus

  • Idle: 380W
  • Load: 413W

New Zeus

  • Idle: 232W
  • Load: 270W

Nice. With an Intel Atom-based server it could go _a lot_ lower, but I’m happy with this.


Part II: Rebuilding ZEUS – The Operating System, FileSystem & Virtualisation

October 18th, 2009

Now that I’ve decided what I want out of the server (and the hardware I’ve got), it’s time to work out which operating system to run it on. Currently, ZEUS is running Ubuntu Gutsy (7.10) with an XFS volume on LVM holding approximately 2.5TB worth of data. There’s a cron job that defrags the XFS volume to keep things in order.
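
For reference, the defrag job is nothing fancy – an /etc/crontab entry along these lines (the schedule, time limit and log path here are illustrative, not my actual crontab):

# Reorganise (defragment) all mounted XFS filesystems, for at most 2 hours, every Sunday at 3am
0 3 * * 0 root /usr/sbin/xfs_fsr -t 7200 -v >> /var/log/xfs_fsr.log 2>&1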

The Operating System

As the operating system is no longer maintained (I misjudged how long it would survive), I have to find an OS that supports the hardware platform without hacky workarounds (by which I mean avoiding buggy ACPI and the NForce4 chipset’s IRQ problems) and has a file system that will benefit me long term.

There were a few considerations:

  • Ubuntu 8.04.x LTS
    I like Ubuntu; I’m comfortable with the userland and find the Debian package system (in particular the dependency resolving) most impressive. Hardware is well supported, and 8.04.3 (at the time of writing) boots on both the hardware I originally selected (Intel) and the new configuration I recently picked (AMD). I could most definitely use Ext4, but the problems with data loss (which I’ve reproduced on several occasions on desktop machines) scare me.
    Filesystem: I’d have to adopt either XFS or Ext4 on LVM to factor in future-proofing (roughly as sketched after this list), maybe get some fakeRAID happening for redundancy.
    Installation: comes with a Server edition that’s bare bones, allowing a minimalistic installation, which is always nice!
  • Ubuntu 9.04
    Initially when I started to rebuild Zeus back in April I wanted to use Ubuntu 9.04; I was really excited about Ext4 and the promise of a brand-spanking new filesystem and what it would bring to the table. Unfortunately, after using Ext4 with 9.04 I’ve come to realise it’s probably not the wisest to trust your data with it just yet – unless you get yourself a UPS! The laptop seems to be chugging along nicely though.
    Installation: like the LTS, it comes with a Server edition that’s bare bones, allowing a minimalistic installation, which is always nice! (copy/paste!) Unfortunately picking 9.04 when 9.10 is just around the corner is not ideal – I’d be stuck where I am right now in a year or so.
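
For the LTS route, the filesystem side would have looked something like this hypothetical sketch – carve out an LVM volume and format it XFS (device names, sizes and mount point are made up for illustration):

# Pool two disks into a volume group and create one big logical volume
pvcreate /dev/sdb /dev/sdc
vgcreate data /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n storage data

# Format it as XFS and mount it
mkfs.xfs /dev/data/storage
mount /dev/data/storage /srv/storage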

So in case the sudden influx of OpenSolaris posts didn’t give you the hint, I decided on OpenSolaris to power the new iZeus 2.0 – actually no, that sounds lame; zeusy will be the new ZEUS until ZEUS is retired, at which point zeusy becomes zeus (confused?).

Why ZFS?

ZFS is one of those file-systems you look at and think, wow! Why didn’t anyone else think of that before?

  • Very simple administration – you only use two commands, zpool and zfs (see the sketch just after this list).
  • Highly scalable – 128-bit means we can hold 16 exabytes, or 18 million terabytes, worth of data! More porn for you! XFS can no doubt handle the TBs we use for our home boxes now, but there’s no chance you’ll get the performance or benefits of ZFS out of Ext3/Ext4 or XFS.
  • Data integrity to heal a filesystem (no fsck’ing around!) – 256-bit checksumming to protect data; if ZFS detects a problem it will attempt to reconstruct the bad block and continue on its merry way (utilising available redundancy).
  • Compression – you can elect to compress a particular filesystem or a whole hierarchy just by setting one property! I’m thinking things like logs here.
  • No hardware dependency – JBOD on a controller, let ZFS maintain the RAID volumes in software. Check out Michael Pryc’s crazy adventure with ZFS using USB thumb drives and Constantin’s original voyage with USB drives! RAID-Z is essentially RAID-5 without the write-hole problem that has plagued it when power is lost during a write, and it can survive the loss of a drive (with RAIDZ-2 you can lose two drives).
  • Happy snaps for free! Snapshot a (live) filesystem as many times as you like – again, one easy command. It’s like that tendency to hit {CTRL+S} when you’re working in Windows, from back in the days of Windows 9x: snapshot regularly!
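
To give you an idea of just how little typing is involved, here’s roughly what those two commands look like in practice (the pool, filesystem and device names below are examples, not my final layout):

# Create a RAID-Z pool out of six disks
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Turn on compression for a logs filesystem
zfs create tank/logs
zfs set compression=on tank/logs

# Snapshot, list and roll back – one command each
zfs snapshot tank/logs@before-cleanup
zfs list -t snapshot
zfs rollback tank/logs@before-cleanup

# Scrub the pool and check its health (the self-healing bit)
zpool scrub tank
zpool status -v tank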

So ZFS sounds much like marketing spiel right now – best thing since sliced bread, cooler than a cucumber – and you’d be right: it is cool, and the best thing since filesystems came into being. Over the coming days I’ll post some more of my musings with ZFS, keeping in mind that I’m still learning these things. It helps to have lots of hardware to play with, but even if you don’t, you can knock up a virtual version of OpenSolaris in VirtualBox, create some virtual disks and try it out.
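
If you don’t even want to dedicate virtual disks to it, you can knock up a throwaway pool on file-backed vdevs – a quick sketch (the sizes and paths are arbitrary):

# Create four 256MB files to act as pretend disks
mkfile 256m /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4

# Build a RAID-Z pool out of them and have a play
zpool create playpool raidz /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4
zpool status playpool

# Tear it all down when you're done
zpool destroy playpool
rm /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 /tmp/vdev4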

There are a few caveats I’ve come across using ZFS though; one is memory! ZFS will try to cache as much data as it can in RAM, so if you have 8GB of RAM (as I have in this box) it will happily use as much of it as it can get. Rightfully so: I was getting ~96MB/s transferring a 16GB MPEG from one box to the other over our gigabit link (that’s from one end of the house to the other!). Mind you, this was just a test configuration using 2x 74GB Western Digital Raptors (WD740ADFD) in a RAID-0 style hitting a single 150GB Western Digital Raptor (WD1500ADFD). They could have gone much higher, but I was happy with that.
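
If the ARC’s appetite ever becomes a problem, it can be capped with a tunable in /etc/system (the 2GB value below is just an example cap – pick your own, and note it needs a reboot to take effect):

* Cap the ZFS ARC at 2GB (0x80000000 bytes) - example value only
set zfs:zfs_arc_max = 0x80000000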

There are also (as of writing) no recovery tools for ZFS, though these are slated to arrive soon (Q4 2009) – which is quite scary after you read this post about a guy losing 10TB worth of data. A possible revert to an older uberblock may, however, fix some problems.

Virtualisation

Initially I wanted to concentrate quite a bit on virtualisation, so I tried Xen on OpenSolaris. It was quite easy to set up a Xen dom0 in OpenSolaris, but with the 2009.06 release you had to tweak the Xen setup a bit. I wasn’t too enthusiastic about using Xen after seeing the performance lag in Windows during my musings. Instead I’m opting for my crush, VirtualBox.

So why use VirtualBox when you can get a bare-metal hypervisor? Firstly, performance seemed sluggish with Xen for me (I didn’t investigate this too much); secondly, I want to be able to run the latest and greatest OSes without worrying about upgrading Xen (I’m a sucker for OSes!). VirtualBox development has accelerated at a feverish pace – I started with VirtualBox 1.3 in 2007 and it’s come an insanely long way since then. When a new release comes along, it’s as easy as updating VirtualBox and getting all the benefits. Plus, with Sun’s (now Oracle’s) backing of VirtualBox you know things are going to work well on OpenSolaris, and the Extras repository makes it as easy as doing a pkg update.
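
Standing up a headless guest from the shell is only a handful of VBoxManage calls – a rough sketch (the VM name, memory, disk image path and the e1000g0 NIC are all assumptions for illustration):

# Register a new VM, give it memory and a bridged NIC
VBoxManage createvm --name testvm --register
VBoxManage modifyvm testvm --memory 1024 --nic1 bridged --bridgeadapter1 e1000g0

# Attach an existing disk image and boot it without a display
VBoxManage modifyvm testvm --hda /tank/vm/testvm.vdi
VBoxManage startvm testvm --type headless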

I’m still quite intrigued by the way KVM is heading and how it will pan out, but for the future zeus, it will be VirtualBox.


Rebuilding Zeus – Part I.5: Change of heart, change of hardware.

October 14th, 2009

After a bit of digging around, I’ve decided my originally spec’d hardware is too much for a box that will be on 24×7, especially with the rates for electricity going up next year – every little watt counts. The existing 65W CPU isn’t ideal, so I’m opting for a 45W CPU instead, and looking at the lineup that means it’s going to be a walk down AMD way. Fewer watts, less heat and less noise – noice! See AMD’s product roadmap for 2010–2011.

The original specifications I mentioned were the ones from Part I (the Intel Core 2 E6750 on the ASUS P5QL-PRO).

I’ve decided to change the CPU and motherboard but keep the other bits and bobs – I could lose the graphics card and go onboard, but I felt like leaving it there for now. The target budget is $250 maximum for both CPU and mobo, so this means I’m sticking with DDR2, which implies AM2+, but it must also satisfy:

  • CPU has to be 45W, at least 1.6GHz, dual core and no more, and has to support virtualisation.
  • Motherboard has to support 8GB (most boards do!), have at least 2x PCIe and a PCI slot; it would be nice if the onboard network card works (gigabit) but no fuss if it doesn’t. No crazy shebangabang Wi-Fi, remotes and other bling, and if it has onboard video, great – otherwise it’s OK to use a crappy card.

I picked the AMD Athlon X2 5050e CPU because it was cheap (~$80), has a 45W TDP, supports virtualisation and is an AM2 part. Next was the motherboard, with the ASUS, Gigabyte & XFX models as my targets.

Chipset-wise, only a few chipsets fit the criteria for a possible match, because the others just don’t have enough SATA ports available onboard. AMD boards primarily use chipsets supplied by NVIDIA or AMD themselves.

Initially I looked at the ASUS boards (they’ve been nothing but rock solid for me in the past), but after a lot of research scouring the manufacturer sites I ended up picking the Gigabyte GA-MA790X-UD4P, which is based on the AMD 790X chipset. The board came with 8x SATA ports, 3x PCIe, 2x PCI and a gigabit NIC, all for $137 from PCCaseGear. Not only was the power consumption lower, but the noise and heat generated were substantially lower too!

Coming in close were the ASUS M4N78 PRO and the ASUS M4A78 PRO; unfortunately each of those had fewer SATA ports (two less) and fewer PCIe slots (one less).

GA-MA790X-UD4P

Part I: Rebuilding ZEUS, the journey of training the next home server

October 6th, 2009

I’ve been looking at upgrading our existing home server from the archaic (and unsupported!) Ubuntu Gutsy (because I was feeling gutsy at the time) to something newer and fresher that will last me at least another two years. This is purely for my own documentation.

Current Setup

Currently it’s running an AMD setup with Ubuntu Gutsy (7.10) – I didn’t think it would last this long, honest! Ubuntu 6.06 had too many hardware/driver incompatibilities.

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=7.10
DISTRIB_CODENAME=gutsy
DISTRIB_DESCRIPTION="Ubuntu 7.10"

On an ASUS A8N-SLI Deluxe motherboard (because, you know, servers need SLI!) sporting an AMD Athlon64 3200+ (the only AMD CPU at home!) with 2GB of RAM (hey, DDR1 wasn’t cheap enough!).

lspci

00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev a3)
00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev f3)
00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)
00:02.0 USB Controller: nVidia Corporation CK804 USB Controller (rev a2)
00:02.1 USB Controller: nVidia Corporation CK804 USB Controller (rev a3)
00:04.0 Multimedia audio controller: nVidia Corporation CK804 AC'97 Audio Controller (rev a2)
00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:09.0 PCI bridge: nVidia Corporation CK804 PCI Bridge (rev f2)
00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev f3)
00:0b.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0c.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0d.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0e.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:00.0 VGA compatible controller: nVidia Corporation G70 [GeForce 7300 GT] (rev a1)
05:06.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:07.0 RAID bus controller: Silicon Image, Inc. Adaptec AAR-1210SA SATA HostRAID Controller (rev 02)
05:0a.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:0b.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
05:0c.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13)

/proc/cpuinfo

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 47
model name      : AMD Athlon(tm) 64 Processor 3200+
stepping        : 2
cpu MHz         : 1000.000
cache size      : 512 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow up pni lahf_lm ts fid vid ttp tm stc
bogomips        : 2011.59
clflush size    : 64

This faithful boxen has been our primary fileserver (XFS+LVM, 3TB) – used internally in our house and also by others who upload their stuff to be backed up. It runs Subversion repositories, Apache/lighttpd test servers for PHP work, virtualisation for Windows 2003, 2000 and SQL Server instances used for testing, and several other things (think: TeamCity, continuous integration tools, Confluence etc). It’s also been damn convenient, when you’re at work or on holidays, to be able to log in, muse about via SSH and even fix things remotely.

Needs & Wants

The new server will need to fulfil the following roles:

  • Function as a NAS to continue to offer backup (via users’ home directories) and storage options
    • No file-system constraints aside from no Ext3 or ReiserFS.
  • Offer the ability to still run virtual machines – I need to virtualise CentOS, Ubuntu and Windows for testing, and they’ll be running in bridged mode
  • No real need for a GUI (I can consider myself a little more l33t than a few years ago)
  • Run a Subversion repository (not that hard!)

The idea is to have a bare-bones operating system install and have the virtual machines handle the hard and ugly work – webservers to test things, servers to try development deployments (Java) and other bits and pieces. The core OS just has to manage the NAS and let me SSH in to offer Subversion access.
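
Serving Subversion over SSH needs nothing more than a repository and a shell account – roughly like this (the /tank/svn path and repository name are just assumed for illustration):

# On the server: create the repository
svnadmin create /tank/svn/projects

# From a client: check it out over SSH
svn checkout svn+ssh://zeus/tank/svn/projects projects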

Hardware

The hardware I’ve picked is from things I had around the place; the only thing I’ve bought is new sticks of RAM.

  • Motherboard: ASUS P5QL-PRO
    This board offered some excellent specifications via the P43 chipset. The things I looked for were the number of SATA ports ‘out of the box’ – 6 native SATA2 – the number of 1x PCIe slots (2!) for future additions of PCIe SATA adapters, and the maximum amount of memory possible (8GB). Oh, and of course, something cheap that can run the CPU I had lying around. A gigabit NIC was also important (dual would be better!), but if it wasn’t supported I had trusty Intel PRO/1000 MT Server PCI cards to fill the void – almost everything supports them (e1000)!
  • CPU: Intel Core 2 E6750 – 2.66GHz (65W TDP, VT)
    What was important here was Intel VT support, a low TDP and a dual core that’s not clocked too high.
  • RAM: Corsair TWIN2X4096-6400C5 (4GB kit x 2 = 8GB)
    Cheapy cheapy, twice the fun of a regular kit, slightly higher CAS, but who CAreS – this isn’t being overclocked.
  • Graphics: ASUS 9400GT PCI-Express
    The cheapest graphics card to be found at the legendary & award-winning computer store MSY Technologies. Depending on how the drivers go (I’m usually biased towards ATI for all Linuxes) I might end up paying for an ATI card later.

Next up is the investigation; be warned though, I started this initially back in June/July (possibly a bit earlier).
