Archive

Archive for the ‘Virtualisation’ Category

Oracle releases VirtualBox 3.2

May 20th, 2010 1 comment

With the Sun now set, Oracle has finally released VirtualBox 3.2 🙂 In particular there are some lovely optimisations for the newer Intel Core i5/i7 processors, Large Page support (which helps significantly on Windows x64 and Linux), a very welcome optimisation of the networking in VirtualBox, and multi-monitor support for Windows guests. What's more, RDP sessions are now accelerated (VRDP).

Amongst the changes from the changelog:

This version is a major update. The following major new features were added:

  • Following the acquisition of Sun Microsystems by Oracle Corporation, the product is now called Oracle VM VirtualBox and all references were changed without impacting compatibility
  • Experimental support for Mac OS X guests (see the manual for more information)
  • Memory ballooning to dynamically increase or decrease the amount of RAM used by a VM (64-bit hosts only) (see the manual for more information)
  • Page Fusion automatically de-duplicates RAM when running similar VMs thereby increasing capacity. Currently supported for Windows guests on 64-bit hosts (see the manual for more information)
  • CPU hot-plugging for Linux (hot-add and hot-remove) and certain Windows guests (hot-add only) (see the manual for more information)
  • New Hypervisor features: with both VT-x/AMD-V on 64-bit hosts, using large pages can improve performance (see the manual for more information); also, on VT-x, unrestricted guest execution is now supported (if nested paging is enabled with VT-x, real mode and protected mode without paging code runs faster, which mainly speeds up guest OS booting)
  • Support for deleting snapshots while the VM is running
  • Support for multi-monitor guest setups in the GUI for Windows guests (see the manual for more information)
  • USB tablet/keyboard emulation for improved user experience if no Guest Additions are available (see the manual for more information).
  • LsiLogic SAS controller emulation (see the manual for more information)
  • RDP video acceleration (see the manual for more information)
  • NAT engine configuration via API and VBoxManage
  • Use of host I/O cache is now configurable (see the manual for more information)
  • Guest Additions: added support for executing guest applications from the host system (replaces the automatic system preparation feature; see the manual for more information)

Download from VirtualBox or get the Windows build. I'm really hoping the good Oracle keeps VirtualBox open; this is one kickass bit of kit.


VirtualBox 3.2.0 Beta 1 Released!

May 3rd, 2010 No comments

Finally downloaded the latest 3.2.0 beta of VirtualBox today and gave it a go!

From the forum post for this pre-release:

VirtualBox Version 3.2.0 is a major update. The following major new features were added:

  • Following the acquisition of Sun Microsystems by Oracle Corporation, the product is now called Oracle VM VirtualBox and all references were changed without impacting compatibility.
  • Experimental support for Mac OS X guests
  • Memory ballooning to dynamically increase or decrease the amount of RAM used by a VM (64-bit hosts only) (see the manual for more information)
  • CPU hot-plugging for Linux (hot-add and hot-remove) and certain Windows guests (hot-add only) (see the manual for more information)
  • New Hypervisor features: with both VT-x/AMD-V on 64-bit hosts, using large pages can improve performance (see the manual for more information); also, on VT-x, unrestricted guest execution is now supported (if nested paging is enabled with VT-x, real mode and protected mode without paging code runs faster, which mainly speeds up guest OS booting)
  • Support for deleting snapshots while the VM is running
  • Support for multi-monitor guest setups in the GUI (see the manual for more information)
  • USB tablet/keyboard emulation for improved user experience if no Guest Additions are available
  • LsiLogic SAS controller emulation
  • RDP video acceleration
  • NAT engine configuration via API and VBoxManage
  • Guest Additions: added support for executing guest applications from the host system
  • OVF: enhanced OVF support with custom namespace to preserve settings that are not part of the base OVF standard

In addition, the following items were fixed and/or added:

  • VMM: fixed crash with the OpenSUSE 11.3 milestone kernel during early boot (software virtualization only)
  • VMM: fixed OS/2 guest crash with nested paging enabled
  • VMM: fixed Windows 2000 guest crash when configured with a large amount of RAM (bug #5800)
  • VMM: fixed massive display performance loss (AMD-V with nested paging only)
  • Linux/Solaris guests: PAM module for automatic logons added
  • GUI: guess the OS type from the OS name when creating a new VM
  • GUI: added VM setting for passing the time in UTC instead of passing the local host time to the guest (bug #1310)
  • GUI: fixed seamless mode on secondary monitors (bugs #1322 and #1669)
  • GUI: added --seamless and --fullscreen command line switches (bug #4220)
  • Settings: be more robust when saving the XML settings files
  • Mac OS X: rewrite of the CoreAudio driver and added support for audio input (bug #5869)
  • Mac OS X: external VRDP authentication module support (bug #3106)
  • Mac OS X: moved the realtime dock preview settings to the VM settings (no global option anymore). Use the dock menu to configure it.
  • Mac OS X: added the VM menu to the dock menu
  • 3D support: fixed corrupted surface rendering (bug #5695)
  • 3D support: fixed VM crashes when using ARB_IMAGING (bug #6014)
  • 3D support: fixed assertion when guest applications use several windows with a single OpenGL context (bug #4598)
  • 3D support: added GL_ARB_pixel_buffer_object support
  • 3D support: added OpenGL 2.1 support
  • 3D support: fixed final frame of Compiz animation not updated to the screen (Mac OS X only) (bug #4653)
  • Added support for a virtual high precision event timer (HPET)
  • LsiLogic: fixed detection of hard disks attached to port 0 when using the drivers from LSI
  • NAT: fixed ICMP latency (non-Windows hosts only; bug #6427)
  • Keyboard/Mouse emulation: fixed handling of simultaneous mouse/keyboard events under certain circumstances (bug #5375)
  • Shared folders: fixed issue with copying read-only files (Linux guests only; bug #4890)
  • OVF: fixed mapping between the two IDE channels in OVF and the one IDE controller in VirtualBox

Bootilicious! Download links are on the site (updated for BETA2).


VirtualBox 3.1 released!

December 1st, 2009 No comments

Just when you thought you could start a new month without some new software, Sun has blessed us all with a ray of VirtualBox 3.1 goodness! All hail the Sun. I've been using the betas and trying out the spanking awesome Teleportation feature in VirtualBox 3.1. So let's take a bit of a look at the new grub.

Beam me up Scotty!

You know, people say the catchphrase thinking it's from Star Trek, but did you know that it was never actually said in any episode?

Teleportation, aka 'Live Migration' in Xen/KVM or vMotion in VMWare, allows you to move a running virtual machine to another host without any downtime. Sun brings this 'Enterprise' feature to VirtualBox. What's even cooler is that you can teleport your running VM between different host platforms (Windows -> OpenSolaris or Linux, and vice versa), but not from one hardware set (Intel) to another (AMD) unless they both have the same instruction sets. The transport layer for the teleportation is TCP/IP, so as long as the agreed port is open and accessible you can even teleport it through the tubes! (assuming you have a fast link like those pesky Dutch)

There are a few conditions and caveats, as I've found. Firstly, you must ensure (as you'd expect) that the target VM has exactly the same configuration as the source VM (same RAM, graphics memory, storage, CD/DVD images etc.). The other thing is to be wary of the CPUs the host computers have. As long as it's between the same generation (different clock speeds are OK) it should work (I tried between a QX6850 -> E6600, but QX6850 -> AMD X2 4600+ wasn't so pretty!).

Once you've configured the target host to match the source host, it's time to ask VirtualBox to keep its eyes open for an incoming beam:

VBoxManage modifyvm [VirtualMachineName] --teleporter on --teleporterport [Port]

Then start the target VM (it will sit waiting for the incoming beam) and, from the source host, send out the beams to initiate the teleportation:

VBoxManage controlvm [VirtualMachineName] teleport --host [TargetIP] --port [Port]

Give it some time to think and, if you tried a localhost migration, it should migrate seamlessly 🙂

Scotty doesn’t know

Scotty doesn't know about the other little changes, but you will. The new VirtualBox has lots of refinements in the UI. For one, there are new icons for all the guest operating systems. The settings window has had a makeover and includes 'optimal settings' detection.

Windows 2003 VM in VirtualBox 3.1

Here it's telling me my Windows 2003 VM should have at least 20Mb of video memory assigned to it to work well in full-screen mode. Heading over to the Display options in VirtualBox 3.1, we find that the Video Memory selectors have little indicators now, as well as the inclusion of 2D Video Acceleration.

Windows 2003 VM - VirtualBox 3.1 Display Settings

Depending on how many cores you have, it will highlight what you should set as the maximum number of cores available for your virtual machine, as well as the recommended RAM allocation. This is what I see on my Intel QX6850 development workstation.

VirtualBox 3.1 System Processor Settings

VirtualBox 3.1 - Motherboard Settings

VirtualBox now also has experimental support for the Extensible Firmware Interface (EFI), which will eventually replace the aging BIOS bootstrap (still the default). Well-known operating systems that boot via EFI include Windows Vista and Windows 7, Apple's OS X, and Fedora 11+.

The Storage controls in the VirtualBox GUI have also had a bit of a makeover. The options to select a disk and a controller have changed, and CD/DVD drives can now be attached to an arbitrary IDE controller!

VirtualBox 3.1 - Storage

The networking settings GUI in the new VirtualBox has changed too; not only that, but you can now configure the network interfaces whilst the guest is running – YAY!

VirtualBox 3.1 Network Settings

Snapshots are a lot more flexible in this release (much like VMWare's snapshot feature). Previously you could only restore from the last created snapshot; now any arbitrary snapshot can be restored or branched off.

For those who use OpenSolaris (like yours truly!), the rewritten USB support (still experimental btw!) should mean we can interact with our USB devices on Solaris Nevada 124 or higher – I'm running 127 and have USB devices appearing in my VMs.

If those don't give you any indication of the pure awesomeness of this release, there was also a significant performance improvement for PAE and AMD64 guests (VT-x/AMD-V), which will be quite noticeable from what I've been told by a colleague.

As Barack Obama said, 'tis time for a change..log.

He didn't say that, I just reused Three 6 Mafia's Lolli Lolli. The entire changelog from the website appears below.

VirtualBox 3.1.0 (released 2009-11-30)

This version is a major update. The following major new features were added:

  • Teleportation (aka live migration); migrate a live VM session from one host to another (see the manual for more information)
  • VM states can now be restored from arbitrary snapshots instead of only the last one, and new snapshots can be taken from other snapshots as well (“branched snapshots”; see the manual for more information)
  • 2D video acceleration for Windows guests; use the host video hardware for overlay stretching and color conversion (see the manual for more information)
  • More flexible storage attachments: CD/DVD drives can be attached to an arbitrary IDE controller, and there can be more than one such drive (see the manual for more information)
  • The network attachment type can be changed while a VM is running
  • Complete rewrite of experimental USB support for OpenSolaris hosts making use of the latest USB enhancements in Solaris Nevada 124 and higher
  • Significant performance improvements for PAE and AMD64 guests (VT-x and AMD-V only; normal (non-nested) paging)
  • Experimental support for EFI (Extensible Firmware Interface; see the manual for more information)
  • Support for paravirtualized network adapters (virtio-net; see the manual for more information)

In addition, the following items were fixed and/or added:

  • VMM: guest SMP fixes for certain rare cases
  • GUI: snapshots include a screenshot
  • GUI: locked storage media can be unmounted by force
  • GUI: the log window grabbed all key events from other GUI windows (bug #5291)
  • GUI: allow to disable USB filters (bug #5426)
  • GUI: improved memory slider in the VM settings
  • GUI: the VirtualBox website couldn’t be opened from the help menu (bug #4559)
  • 3D support: major performance improvement in VBO processing
  • 3D support: added GL_EXT_framebuffer_object, GL_EXT_compiled_vertex_array support
  • 3D support: fixed crashes in FarCry, SecondLife, Call of Duty, Unreal Tournament, Eve Online (bugs #2801, #2791)
  • 3D support: fixed graphics corruption in World of Warcraft (#2816)
  • 3D support: fixed Final frame of Compiz animation not updated to the screen (#4653)
  • 3D support: fixed incorrect rendering of non ARGB textures under compiz
  • iSCSI: support iSCSI targets with more than 2TiB capacity
  • VRDP: fixed occasional VRDP server crash (bug #5424)
  • Network: fixed the E1000 emulation for QNX (and probably other) guests (bug #3206)
  • NAT: added host resolver DNS proxy (see the manual for more information)
  • VMDK: fixed incorrectly rejected big images split into 2G pieces (bug #5523, #2787)
  • VMDK: fixed compatibility issue with fixed or raw disk VMDK files (bug #2723)
  • VHD: fixed incompatibility with Hyper-V
  • Support for Parallels version 2 disk image (HDD) files; see the manual for more information
  • OVF: create manifest files on export and verify the content of an optional manifest file on import
  • OVF: fixed memory setting during import (bug #4188)
  • Mouse device: now five buttons are passed to the guest (bug #3773)
  • VBoxHeadless: fixed loss of saved state when VM fails to start
  • VBoxSDL: fixed crash during shutdown (Windows hosts only)
  • X11 based hosts: allow the user to specify their own scan code layout (bug #2302)
  • Mac OS X hosts: don’t auto show the menu and dock in fullscreen (bug #4866)
  • Mac OS X hosts (64 bit): don’t interpret mouse wheel events as left click (bug #5049)
  • Mac OS X hosts: fixed a VM abort during shutdown under certain conditions
  • Solaris hosts: combined the kernel interface package into the VirtualBox main package
  • Solaris hosts: support for OpenSolaris Boomer architecture (with OSS audio backend).
  • Shared folders: VBOXSVR is visible in Network folder (Windows guests, bug #4842)
  • Shared folders: performance improvements (Windows guests, bug #1728)
  • Windows, Linux and Solaris Additions: added balloon tip notifier if VirtualBox host version was updated and Additions are out of date
  • Solaris guests: fixed keyboard emulation (bug #1589)
  • Solaris Additions: fixed as_pagelock() failed errors affecting guest properties (bug #5337)
  • Windows Additions: added automatic logon support for Windows Vista and Windows 7
  • Windows Additions: improved file version lookup for guest OS information
  • Windows Additions: fixed runtime OS detection on Windows 7 for session information
  • Windows Additions: fixed crash in seamless mode (contributed by Huihong Luo)
  • Linux Additions: added support for uninstalling the Linux Guest Additions (bug #4039)
  • Linux guest shared folders: allow mounting a shared folder if a file of the same name as the folder exists in the current directory (bug #928)
  • SDK: added object-oriented web service bindings for PHP5

Overall this is a solid new release from Sun, though I'm unsure about its stability as I've only been running a few VMs (Windows 2003, CentOS and Fedora 12) for about 10-12hrs. Nothing bad as yet.

Download from the VirtualBox site:

  • VirtualBox 3.1.0 for Windows hosts x86/amd64
  • VirtualBox 3.1.0 for Solaris and OpenSolaris hosts x86/amd64

Enjoy!


In the Zone, Creating OpenSolaris Zones.

November 22nd, 2009 No comments

I'm really enjoying using OpenSolaris as our server / NAS at home; it's a different ball game to Linux, but an interesting one nevertheless. One of the cool features of Solaris is Solaris Zones (or Solaris Containers). Zones are an implementation of operating-system-level virtualisation, where the kernel isolates multiple instances of the available user-space. Something like chroot, but so much more. Unlike running under a hypervisor (like VMWare or VirtualBox), Zones have very little (if any) overhead.

As I've come to realise, because of the way Solaris works in general, you can have a separate (isolated & secure) Zone for each application service exposed by the server – e.g. one for Tomcat, one for Glassfish, maybe both Apache 1.3.x and 2.x, MySql, Postgres etc. What's more, you can limit how many resources these Zones can utilise, as sketched below. They all have their own configuration, including network routing (coupled with OpenSolaris Crossbow), and you can make one kick-ass setup that won't break another area of the operating system.

In the Zones.

Here’s a guide on setting up a new Zone in OpenSolaris, configuring it and booting it.

Me Against the Music, its all in the global zone

When we first install OpenSolaris we've already got ourselves into a zone (the parent of all other zones), which is known as the global zone.

You can see this by listing all the available zones on a virgin install of OpenSolaris:

opensolaris# zoneadm list -vc
 ID NAME             STATUS     PATH                           BRAND    IP
 0 global           running    /                              native   shared

The output will be something like above. Now we can go about creating ourselves a zone for playing around in.

When working with zones, we only need to worry about three commands (damn, I love that!): the zoneadm command to administer the zone, the zonecfg command to configure the zone, and zlogin to log in to the zone from the global zone.

First we have to do a bit of planning and think about what we're going to do with this zone.

Here are few things to consider:

  • What do you want to run in the zone?
  • Will it need networking and have it exposed outside of the machine?
  • Where will the zone reside on your disk?
  • Would you like to limit the amount of CPUs the zone can see?
  • Would you like to limit the amount of RAM the zone can utilise?
  • Do you want to automatically boot the Zone when OpenSolaris starts?

For this post, we’re going to create a simple Zone (we won’t install anything).

Toxic Zone

To create a zone, we pass a zone name to the zonecfg command.

opensolaris# zonecfg -z toxic

You'll get something like this appearing because the zone doesn't exist yet; that's fine.

toxic: No such zone configured
Use 'create' to begin configuring a new zone.

Then you will be inside the zonecfg configuration.

Let's configure this zone to have the following:

  • Reside in /base/zones/toxic (each zone needs its own zonepath)
  • Autoboot with OpenSolaris
  • Shared IP of 192.168.0.24 bound to physical interface e1000g1

Follow me:

zonecfg:toxic> create
zonecfg:toxic> set zonepath=/base/zones/toxic
zonecfg:toxic> set autoboot=true
zonecfg:toxic> add net
zonecfg:toxic:net> set address=192.168.0.24
zonecfg:toxic:net> set physical=e1000g1
zonecfg:toxic:net> end
zonecfg:toxic> verify
zonecfg:toxic> commit
zonecfg:toxic> exit

This will create the configuration, verify, write it and exit. You can verify it was created by running the list command again:

opensolaris# zoneadm list -vc
ID NAME             STATUS         PATH
0 global           running        /
- toxic            configured     /base/zones/toxic

It's currently in the configured state; you can read more about the non-global zone state model in the documentation. The next thing to do is install the zone – this will get the base packages set up and configured for use.

opensolaris# zoneadm -z toxic install

Everytime, boot her up.

Next, let's boot this bad baby up.

opensolaris# zoneadm -z toxic boot

Now if we do a list again we’ll see that our state has changed to running.

opensolaris# zoneadm list -vc
ID NAME             STATUS         PATH
0 global           running        /
- toxic            running        /base/zones/toxic

Now we have to configure the zone itself – just like a real machine. For this we use the zlogin command to log in to the zone's console.

opensolaris# zlogin toxic
[Connected to zone 'toxic' pts/5]
Last login: Sat Nov 21 17:52:43 on pts/5
Sun Microsystems Inc.   SunOS 5.11      snv_127 November 2008
root@toxic#

After that, we're in the toxic zone. Anything we do inside here stays within this zone and won't affect the global or other zones. But before we continue, we really should configure our networking.

First, let's modify our /etc/nsswitch.conf file with vi.

...
passwd:     files
group:      files
hosts:      files dns
ipnodes:    files
networks:   files
...

Make sure the hosts entry has dns as above. Next we need to configure the nameservers.

toxic# echo 'nameserver 192.168.0.254' > /etc/resolv.conf

That will create a resolv.conf file with the nameserver, which you can grab from the global zone since it will be different for everyone:

opensolaris# cat /etc/resolv.conf
nameserver 192.168.0.254

Breathe on me, reboot the zone.

Now we can access the network just like the global zone, so you can do a package refresh and image-update too.

toxic# pkg refresh && pkg image-update

If it succeeds, we have correctly set up our zone and it's ready for use – you may want to reboot the zone, however. To do this, exit the toxic console.

toxic# exit
logout

[Connection to zone 'toxic' pts/5 closed]
opensolaris#

Then let's reboot the zone.

opensolaris# zoneadm -z toxic reboot
opensolaris# zlogin toxic
[Connected to zone 'toxic' pts/5]
Last login: Sat Nov 21 17:58:44 on pts/5
Sun Microsystems Inc.   SunOS 5.11      snv_127 November 2008
root@toxic#

Outrageous, removing the zones.

Now, how about removing this zone and trying again? First get out of the zone console and back to your global zone, then issue the halt command to shut down the zone.

root@toxic# exit
opensolaris# zoneadm -z toxic halt

Once it's stopped, simply remove it.

opensolaris# zoneadm -z toxic uninstall
opensolaris# zonecfg -z toxic delete

You can make sure it's gone by using the list command. That's all there is to it!

Now you can consider yourself, In The Zone.


Part III: Zeus rebuilt and configured!

November 21st, 2009 1 comment

I’ve spent the last month working with the newly built zeus server which is now powered by OpenSolaris (2009.06).

Here’s my final hardware specifications:

  • CPU: AMD Athlon X2 5050e – 2.6Ghz (45W TDP, AMD-V)
  • Motherboard: Gigabyte GA-MA790X-UD4P (AMD 790X Chipset)
  • RAM: 2x Corsair TWIN2X4096-6400C5 (4Gb kit x 2 = 8Gb)
  • Graphics: ASUS 9400GT PCI-Express
  • Hard Disks:
    • rpool – 2x WD740ADFD – 74Gb 10K RPM, 16Mb Cache (mirror’d)
    • tank – 6x WD1002FBYS – 1TB, 7200RPM, 32Mb Cache (raidz)
    • base – 2x WD7500AAKS – 750Gb, 7200RPM, 16Mb (mirror’d)
  • Addon cards:
    • SATA – Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller
    • NICs – 2x Intel Corporation 82545GM Gigabit Ethernet Controller (e1000g)

I’ve finally managed to get the GA-MA790X-UD4P on the OpenSolaris HCL list – woo! Unfortunately the onboard NIC will not work in the 2009.06 release even though it is detected:

Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller

Maybe in a future release. Make sure you update the BIOS, as OpenSolaris may otherwise have an issue with the USB controller being 'mis-configured'.

Just for kicks I went to Jaycar and bought myself a power usage meter to measure the watts used by the new boxen (see a review of the Mains Power Meter on DansData).

Old Zeus

  • Idle: 380W
  • Load: 413W

New Zeus

  • Idle: 232W
  • Load: 270W

Nice. With an Intel Atom based server it could go _a lot_ lower, but I'm happy with this.


VirtualBox 3.0.12 released!

November 18th, 2009 No comments

VirtualBox 3.0.12 has been released.

VirtualBox 3.0.12 (released 2009-11-17)

This is a maintenance release. The following items were fixed and/or added:

  • VMM: reduced IO-APIC overhead for 32 bits Windows NT/2000/XP/2003 guests; requires 64 bits support (VT-x only; bug #4392)
  • VMM: fixed double timer interrupt delivery on old Linux kernels using IO-APIC (caused guest time to run at double speed; bug #3135)
  • VMM: reinitialize VT-x and AMD-V after host suspend or hibernate; some BIOSes forget this (Windows hosts only; bug #5421)
  • VMM: fix loading of saved state when RAM preallocation is enabled
  • BIOS: ignore unknown shutdown codes instead of causing a guru meditation (bug #5389)
  • GUI: never start a VM on a single click into the selector window (bug #2676)
  • Serial: reduce the probability of lost bytes if the host end is connected to a raw file
  • VMDK: fix handling of split image variants and fix a 3.0.10 regression (bug #5355)
  • VRDP: fixed occasional VRDP server crash
  • Network: even if the virtual network cable was disconnected, some guests were able to send / receive packets (E1000; bug #5366)
  • Network: even if the virtual network cable was disconnected, the PCNet card received some spurious packets which might confuse the guest (bug #4496)
  • Shared folders: fixed changing case of file names (bug #2520)
  • Windows Additions: fix crash in seamless mode (contributed by Huihong Luo)
  • Linux Additions: fix writing to files opened in O_APPEND mode (bug #3805)
  • Solaris Additions: fix regression in guest additions driver which among other things caused lost guest property updates and periodic error messages being written to the system log

Download it from the Sun VirtualBox download page.

  • VirtualBox 3.0.12 for Windows hosts x86/amd64
  • VirtualBox 3.0.12 for Solaris and OpenSolaris hosts x86/amd64

Woot!


CentOS 5.4 Released!

October 23rd, 2009 No comments

CentOS 5.4 has been released! Woo yeah! It's been a while since RHEL 5.4 came out; check out the release notes for a list of changes.

Download mirrors are still being updated, but if you're local, here are a couple of Australian mirrors.

CentOS 5.4 x86

CentOS 5.4 x64

I just did an in-place 5.3 -> 5.4 upgrade with a yum update. With a localised mirror, it was blindingly fast too!


Part II: Rebuilding ZEUS – The Operating System, FileSystem & Virtualisation

October 18th, 2009 No comments

Now that I've decided what I want out of the server (and the hardware I've got), it's time to work out what operating system to run it on. Currently, ZEUS is running Ubuntu Gutsy (7.10) with LVM and an XFS volume holding approximately 2.5Tb worth of data. There's a cron job that defrags the XFS volume to keep things in order.

The Operating System

As the operating system is no longer maintained (my oversight in not checking how long it would be supported), I have to find an OS that supports the hardware platform without hacky hacky bits (by which I mean avoiding buggy ACPI, NForce4 chipset issues and IRQ problems) and has a file system that will benefit me long term.

There were a few considerations:

  • Ubuntu 8.04.x LTS
    I like Ubuntu; I'm comfortable with the userland and find the Debian package system (in particular the dependency resolving) most impressive. Hardware is well supported, and 8.04.3 (at the time of writing) boots on the hardware I originally selected (Intel) and on the new configuration I recently selected (AMD). I could most definitely use Ext4, but the problems with data loss (which I've reproduced on several occasions on desktop machines) scare me.
    FileSystem: I'd have to adopt either XFS or Ext4 on an LVM to factor in future-proofing, and maybe get some fakeRAID happening for redundancy.
    Installation: comes with a Server edition that's bare bones, allowing for a minimalistic installation, which is always nice!
  • Ubuntu 9.04
    Initially, when I started to rebuild Zeus back in April, I wanted to use Ubuntu 9.04; I was really excited about Ext4 and the promise of a brand-spanking-new file-system and what it would bring to the table. Unfortunately, after using Ext4 with 9.04, I've come to realise it's probably not the wisest to trust your data with it just yet – unless you get yourself a UPS! My laptop seems to be chugging along nicely though.
    Installation: like the LTS, comes with a bare-bones Server edition (copy/paste!). Unfortunately, picking 9.04 when 9.10 is just around the corner is not ideal; I'd be stuck where I am right now in a year or so.

So, in case the sudden influx of OpenSolaris posts didn't give you the hint, I decided on OpenSolaris to power the new iZeus 2.0. Actually no, that sounds lame: zeusy will be the new ZEUS until ZEUS is retired, at which point zeusy becomes zeus (confused?).

Why ZFS?

ZFS is one of those file-systems you look at and think: wow, why didn't anyone else think of that before?

  • Very simple administration – you only use two commands, zpool and zfs.
  • Highly scalable – 128-bit means we can hold 16 exabytes, or 18 million terabytes, worth of data! More porn for you! XFS can no doubt handle the TBs we use in our home boxes now, but there's no chance you can get the performance or benefits of ZFS from Ext3/Ext4 or XFS.
  • Data integrity and a self-healing filesystem (no fsck'ing around!) – 256-bit checksumming protects the data; if ZFS detects a problem it will attempt to reconstruct the bad block and continue on its merry way (utilising available redundancy).
  • Compression – you can elect to compress a particular file-system, or a whole hierarchy, just by setting one property! I'm thinking things like logs here.
  • No hardware dependency – JBOD on a controller; let ZFS maintain the RAID volumes in software. Check out Michael Pryc's crazy adventure with ZFS using USB thumb drives and Constantin's original voyage with USB drives! RAID-Z is essentially RAID-5 without the write-hole problems that have plagued it when power is lost during a write, and it can also survive the loss of a drive (with RAIDZ-2 you can lose two drives).
  • Happy snaps for free! Snapshot a (live) file-system as many times as you like, again with one easy command. It's like that tendency to hit {CTRL+S} when you're working in Windows, from back in the days of Windows 9x: snapshot regularly! (A quick sketch of these commands follows below.)

So ZFS sounds much like marketing spiel right now: best thing since sliced bread, cooler than a cucumber. And you'd be right, it is cool, and the best thing since filesystems came to being. Over the coming days I'll post some more of my musings with ZFS, keeping in mind that I'm still learning these things. It helps to have lots of hardware to play with, but even if you don't, you can knock up a virtual version of OpenSolaris in VirtualBox, create some virtual disks and try it out.

There are a few caveats I've come across using ZFS though; one is memory! ZFS will try to cache as much data as it can in RAM, so if you have 8Gb of RAM (as I have in this box) it will happily use as much of it as it can get away with. Rightfully so: I was getting ~96MB/s transferring a 16Gb MPEG from one box to the other over our Gig link (that's from one end of the house to the other!). Mind you, this was just a test configuration using 2x 74Gb Western Digital Raptors (WD740ADFD) in a RAID-0 style hitting a single 150Gb Western Digital Raptor (WD1500ADFD). They could have gone much higher, but I was happy with that.

There are also (as of writing) no recovery tools for ZFS, though these are slated to arrive soon (Q4 2009) – which is quite scary after you read this post about a guy losing 10Tb worth of data; however, a possible revert to an older uberblock may fix some problems.

Virtualisation

Initially I wanted to concentrate quite a bit on virtualisation, and I tried Xen on OpenSolaris. It was quite easy to set up a Xen Dom0 in OpenSolaris, though with the 2009.06 release you had to tweak the Xen setup a bit. I wasn't too enthusiastic about using Xen after seeing the performance lag in Windows during my musings. Instead I'm opting for my crush, VirtualBox.

So why use VirtualBox when you can get a bare-metal hypervisor? Firstly, performance seemed sluggish with Xen for me (I didn't investigate this too much); secondly, I want to be able to run the latest and greatest OSes without worrying about upgrading Xen (I'm a sucker for OSes!). VirtualBox development has accelerated at a feverish pace: I started with VirtualBox 1.3 in 2007 and it's come an insanely long way since then. When a new release comes along, it's as easy as updating VirtualBox and getting all the benefits. Plus, with Sun Oracle's backing of VirtualBox you know things are going to work well on OpenSolaris, and the Extras repository makes updating VirtualBox as easy as a pkg update.

I’m still quite intrigued by the way KVM is heading and how it will pan out, but for the future zeus, it will be VirtualBox.


Rebuilding Zeus – Part I.5: Change of heart, change of hardware.

October 14th, 2009 No comments

After a bit of digging around, I've decided my originally spec'd hardware is too much for a boxen that will be on 24x7, especially with the rates for electricity going up next year; every little watt counts. The existing 65W CPU isn't ideal; instead I'm opting for a 45W CPU, and looking at the lineup, that means it's going to be a walk down AMD way. Fewer watts, less heat and less noise, noice! See AMD's product roadmap for 2010-2011.

The original specifications I mentioned are in Part I.

I've decided to change the CPU and motherboard but keep the other bits and bobs – I could lose the graphics card and go onboard, but I felt like leaving it there for now. The target budget is $250 maximum for both the CPU and mobo, so this means I'm sticking with DDR2, which implies AM2+, but it must also satisfy the following:

  • The CPU has to be 45W, at least 1.6Ghz, dual core (no more), and has to support virtualisation.
  • The motherboard has to support 8Gb (most boards do!) and have at least 2x PCIe and a PCI slot; it would be nice if the onboard network card works (gigabit), but no fuss if it doesn't. No crazy shebangabang WiFi, remotes etc. bling, and if it has onboard video, great; otherwise it's OK to use a crappy card.

I picked the AMD Athlon X2 5050e CPU because it was cheap (~$80), has a 45W TDP, supports virtualisation and is an AM2 part. Next was the motherboard, with the ASUS, Gigabyte & XFX models as my targets.

Chipset-wise, only a few came up as a possible match, because the others just don't have enough SATA ports onboard. AMD boards primarily come with chipsets from NVIDIA or AMD themselves.

Initially I looked at the ASUS boards (they've been nothing but rock solid for me in the past), but after a lot of research scouring the manufacturer sites I ended up picking the Gigabyte GA-MA790X-UD4P, which is based on the AMD 790X chipset. The board came with 8x SATA ports, 3x PCIe, 2x PCI and a Gigabit NIC, all for $137 from PCCaseGear. Not only was the power consumption lower, but the noise and heat generated were substantially lower too!

Coming in close were the ASUS M4N78 PRO and the ASUS M4A78 PRO; unfortunately each had fewer SATA ports (2 less) and fewer PCIe slots (1 less).

GA-MA790X-UD4P

Part I: Rebuilding ZEUS, the journey of training the next home server

October 6th, 2009 No comments

I've been looking at upgrading our existing home server from the archaic (and unsupported!) Ubuntu Gutsy (I was feeling gutsy at the time) to something newer and fresher that will last me at least another 2 years. This is purely for my documentation.

Current Setup

Currently it's an AMD setup running Ubuntu Gutsy (7.10) – I didn't think it would last this long, honest! Ubuntu 6.06 had too many hardware/driver incompatibilities.

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=7.10
DISTRIB_CODENAME=gutsy
DISTRIB_DESCRIPTION="Ubuntu 7.10"

On an ASUS A8N-SLI Deluxe motherboard (because you know, servers need SLI!) sporting an AMD Athlon64 3200+ (the only AMD CPU at home!) with 2Gb of RAM (hey, DDR1 wasn't cheap enough!).

lspci

00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev a3)
00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev f3)
00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)
00:02.0 USB Controller: nVidia Corporation CK804 USB Controller (rev a2)
00:02.1 USB Controller: nVidia Corporation CK804 USB Controller (rev a3)
00:04.0 Multimedia audio controller: nVidia Corporation CK804 AC'97 Audio Controller (rev a2)
00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:09.0 PCI bridge: nVidia Corporation CK804 PCI Bridge (rev f2)
00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev f3)
00:0b.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0c.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0d.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0e.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:00.0 VGA compatible controller: nVidia Corporation G70 [GeForce 7300 GT] (rev a1)
05:06.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:07.0 RAID bus controller: Silicon Image, Inc. Adaptec AAR-1210SA SATA HostRAID Controller (rev 02)
05:0a.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:0b.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
05:0c.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13)

/proc/cpuinfo

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 47
model name      : AMD Athlon(tm) 64 Processor 3200+
stepping        : 2
cpu MHz         : 1000.000
cache size      : 512 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow up pni lahf_lm ts fid vid ttp tm stc
bogomips        : 2011.59
clflush size    : 64

This faithful boxen has been our primary fileserver (XFS+LVM, 3Tb), used within our house and also by others who upload their stuff to be backed up. It runs Subversion repositories, Apache/lighttpd test servers for PHP work, virtualised Windows 2003, 2000 and SQL Servers for testing, and several other things (think TeamCity, continuous integration tools, Confluence etc). It's also been damn convenient, when you're at work or on holidays, to be able to log in, muse about via SSH and even fix things remotely.

Needs & Wants

The new server will need to fulfil the following roles:

  • Function as a NAS to continue to offer backup (via users' home directories) and storage options
    • No file-system constraints, aside from no Ext3 or ReiserFS.
  • Offer the ability to still run virtual machines; I need to virtualise CentOS, Ubuntu and Windows for testing, and they'll be running in Bridged mode
  • No real need for a GUI (I can consider myself a little more l33t than a few years ago)
  • Run a Subversion repository (not that hard!)

The idea is to have a bare-bones operating system install and have the virtual machines handle the hard and ugly work – webservers to test things, servers to try development deployments (Java) and other bits and pieces. The core OS just has to manage the NAS and allow SSH in for Subversion access.

Hardware

The hardware I've picked is from things I had around the place; the only thing I've bought is new sticks of RAM.

  • Motherboard: ASUS P5QL-PRO
    This board offered some excellent specifications via the P43 chipset. The things I looked for were the number of SATA ports 'out of the box' (6 native SATA2), the number of 1x PCIe slots (2!) for future additions of PCIe SATA adapters, and the maximum amount of memory possible (8Gb). And of course something cheap that can run the CPU I had lying around. A Gigabit NIC was also important (dual would be better!), but if it wasn't supported I had trusty Intel PRO 1000MT Server PCI cards to fill the void – almost everything supports them (e1000)!
  • CPU: Intel Core 2 E6750 – 2.66Ghz (65W TDP, VT)
    The important bits were Intel VT support, a low TDP and a dual core that's not clocked too high.
  • RAM: Corsair TWIN2X4096-6400C5 (4Gb kit x 2 = 8Gb)
    Cheapy cheapy, twice the fun of a regular kit, slightly higher CAS, but who CAreS, this isn't being overclocked.
  • Graphics: ASUS 9400GT PCI-Express
    The cheapest graphics card to be found at the legendary & award-winning computer store MSY Technologies. Depending on how the drivers go (I'm usually biased towards ATI for all Linuxes) I might end up paying for an ATI card later.

Next up: the investigation. Be warned though, I started this back in June/July (possibly a bit earlier).
