Posts Tagged ‘amd’

Oracle releases VirtualBox 3.2

May 20th, 2010

With the Sun now set, Oracle has finally released VirtualBox 3.2 :-) In particular there are some lovely optimisations for the newer Intel Core i5/i7 processors, Large Page support (which helps significantly on Windows x64 and Linux), a very welcome optimisation of the networking in VirtualBox, and multi-monitor support for Windows guests. What’s more, RDP sessions are now accelerated (VRDP).

Amongst the changes from the changelog:

This version is a major update. The following major new features were added:

  • Following the acquisition of Sun Microsystems by Oracle Corporation, the product is now called Oracle VM VirtualBox and all references were changed without impacting compatibility
  • Experimental support for Mac OS X guests (see the manual for more information)
  • Memory ballooning to dynamically increase or decrease the amount of RAM used by a VM (64-bit hosts only) (see the manual for more information, and the quick sketch after this list)
  • Page Fusion automatically de-duplicates RAM when running similar VMs thereby increasing capacity. Currently supported for Windows guests on 64-bit hosts (see the manual for more information)
  • CPU hot-plugging for Linux (hot-add and hot-remove) and certain Windows guests (hot-add only) (see the manual for more information)
  • New Hypervisor features: with both VT-x/AMD-V on 64-bit hosts, using large pages can improve performance (see the manual for more information); also, on VT-x, unrestricted guest execution is now supported (if nested paging is enabled with VT-x, real mode and protected mode without paging code runs faster, which mainly speeds up guest OS booting)
  • Support for deleting snapshots while the VM is running
  • Support for multi-monitor guest setups in the GUI for Windows guests (see the manual for more information)
  • USB tablet/keyboard emulation for improved user experience if no Guest Additions are available (see the manual for more information).
  • LsiLogic SAS controller emulation (see the manual for more information)
  • RDP video acceleration (see the manual for more information)
  • NAT engine configuration via API and VBoxManage
  • Use of host I/O cache is now configurable (see the manual for more information)
  • Guest Additions: added support for executing guest applications from the host system (replaces the automatic system preparation feature; see the manual for more information)
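As a taste of the new ballooning feature: assuming a VM called winxp (a made-up name) with the Guest Additions installed, something along these lines sets the balloon from the host – exact flag names may differ slightly between releases:

$ VBoxManage modifyvm winxp --guestmemoryballoon 512
$ VBoxManage controlvm winxp guestmemoryballoon 256

The first pre-sets the balloon size (in MB) for the next start; the second adjusts it on a running VM.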

Download it from VirtualBox or grab the Windows build. I’m really hoping Oracle keeps VirtualBox open – this is one kickass bit of kit.


Part III: Zeus rebuilt and configured!

November 21st, 2009

I’ve spent the last month working with the newly built zeus server, which is now powered by OpenSolaris (2009.06).

Here’s my final hardware specifications:

  • CPU: AMD Athlon X2 5050e – 2.6GHz (45W TDP, AMD-V)
  • Motherboard: Gigabyte GA-MA790X-UD4P (AMD 790X chipset)
  • RAM: 2x Corsair TWIN2X4096-6400C5 (4GB kit x 2 = 8GB)
  • Graphics: ASUS 9400GT PCI-Express
  • Hard Disks (see the zpool sketch after this list):
    • rpool – 2x WD740ADFD – 74GB, 10K RPM, 16MB cache (mirrored)
    • tank – 6x WD1002FBYS – 1TB, 7200RPM, 32MB cache (raidz)
    • base – 2x WD7500AAKS – 750GB, 7200RPM, 16MB cache (mirrored)
  • Addon cards:
    • SATA – Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller
    • NICs – 2x Intel Corporation 82545GM Gigabit Ethernet Controller (e1000g)
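For the curious, the disks above map onto pools roughly like this – a sketch only, with illustrative device names (check format on your own box; rpool itself is created by the installer):

$ pfexec zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
$ pfexec zpool create base mirror c3t0d0 c3t1d0
$ zpool status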

I’ve finally managed to get the GA-MA790X-UD4P on the OpenSolaris HCL list – woo! Unfortunately the onboard NIC will not work in the 2009.06 release even though it is detected:

Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller

Maybe in a future release. Make sure you update the BIOS, otherwise OpenSolaris may have an issue with the USB controller being ‘mis-configured’.

Just for kicks I went to Jaycar and bought myself a power usage meter to measure the watts used by the new boxen (see a review of the Mains Power Meter on DansData).

Old Zeus

  • Idle: 380W
  • Load: 413W

New Zeus

  • Idle: 232W
  • Load: 270W

Nice. With an Intel Atom based server it could go _a lot_ lower, but I’m happy with this.


Part II: Rebuilding ZEUS – The Operating System, FileSystem & Virtualisation

October 18th, 2009

Now that I’ve decided what I want out of the server (and the hardware I’ve got), it’s time to work out what operating system to run the system on. Currently, ZEUS is running Ubuntu Gutsy (7.10) with LVM and an XFS volume holding approximately 2.5TB worth of data. There’s a cron job that defrags the XFS volume to keep things in order.
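(The defrag job is just xfs_fsr run out of cron – a crontab entry along these lines, where the volume path is illustrative and -t caps the run at two hours:)

0 3 * * 0 root /usr/sbin/xfs_fsr -t 7200 /dev/mapper/vg0-data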

The Operating System

As the operating system is no longer maintained (I underestimated how long it would need to survive), I have to find an OS that supports the hardware platform without hacky hacky bits (and by this I mean avoiding buggy ACPI and the NForce4 chipset’s IRQ problems) and has a file system that will benefit me long term.

There were a few considerations:

  • Ubuntu 8.04.x LTS
    I like Ubuntu, I’m comfortable with the userland and find the Debian package system (in particular the dependency resolving) most impressive. Hardware is well supported and 8.04.3 (at the time of writing) boots on the hardware I originally selected (Intel) and the new configuration I recently selected (AMD). I could most definitely use Ext4, but the problems with data-loss (which I’ve reproduced on several occasions on desktop machines) scare me.
    FileSystem: I’d have to adopt either XFS or Ext4 on LVM to factor in future-proofing, maybe get some fakeRAID happening for redundancy.
    Installation: comes with a Server edition that’s bare-bones, allowing for a minimalistic installation – which is always nice!
  • Ubuntu 9.04
    Initially when I started to rebuild Zeus back in April I wanted to use Ubuntu 9.04; I was really excited about Ext4 and the promise of a brand-spanking-new file-system and what it would bring to the table. Unfortunately, after using Ext4 with 9.04 I’ve come to realise it’s probably not the wisest to trust your data with it just yet – unless you get yourself a UPS! My laptop seems to be chugging along nicely though.
    Installation: like the LTS, comes with a Server edition that’s bare-bones, allowing for a minimalistic installation – which is always nice! (copy/paste!) Unfortunately picking 9.04 when 9.10 is just around the corner is not ideal; I’ll be stuck where I am right now in a year or so.

So in case the sudden influx of OpenSolaris posts didn’t give you the hint, I decided on OpenSolaris to power the new iZeus 2.0 – actually no, that sounds lame; zeusy will be the new ZEUS until ZEUS is retired, in which case zeusy becomes zeus (confused?).

Why ZFS?

ZFS is one of those file-systems you look at and think, wow! Why didn’t anyone else think of that before?

  • Very simple administration – you only use two commands, zpool and zfs (see the sketch after this list).
  • Highly scalable – 128-bit means we can hold 16 exabytes, or 18 million terabytes, worth of data! More porn for you! XFS can no doubt handle the TBs we use for our home boxes now, but there’s no chance you can get the performance or benefits of ZFS in Ext3/Ext4 or XFS.
  • Data integrity to heal a filesystem (no fsck’ing around!) – 256-bit checksumming to protect data; if ZFS detects a problem it will attempt to reconstruct the bad block and continue on its merry way (utilising available redundancy).
  • Compression – you can elect to compress a particular file-system or a hierarchy just by setting one property! I’m thinking things like logs here.
  • No hardware dependency – JBOD on a controller, let ZFS maintain the RAID volumes in software. Checkout Michael Pryc’s crazy adventure with ZFS using USB thumb drives and Constantin’s original voyage with USB drives! RAID-Z is essentially RAID-5 without the write-hole problem that has plagued it when power is lost during a write, and it can also survive the loss of a drive (with RAIDZ-2 you can lose two drives).
  • Happy snaps for free! Snapshot a (live) file-system as many times as you like, again with one easy command. It’s like that tendency to hit {CTRL+S} when you’re working in Windows from back in the days of Windows 9x – snapshot regularly!
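To give a feel for just how terse the administration is, here’s a minimal sketch (pool and filesystem names are made up):

$ pfexec zfs create tank/logs
$ pfexec zfs set compression=on tank/logs
$ pfexec zfs snapshot tank/home@before-upgrade
$ pfexec zpool scrub tank
$ zpool status tank

One property enables compression for everything under tank/logs, one command takes the snapshot, and the scrub verifies every block against its checksum.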

So ZFS sounds much like marketing spiel right now – best thing since sliced bread, cooler than a cucumber – and you’d be right: it is cool, and the best thing since filesystems came to being. Over the coming days I’ll post some more on my musings with ZFS – keeping in mind that I’m still learning these things. It helps to have lots of hardware to play with, but even if you don’t, you can knock up a virtual version of OpenSolaris in VirtualBox, create some virtual disks and try it out.
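In fact you don’t even strictly need virtual disks – ZFS will happily build a pool out of plain files. A throwaway raidz pool to play with (paths arbitrary):

$ mkfile 256m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
$ pfexec zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
$ zpool status testpool

Tear it down with zpool destroy testpool when you’re done.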

There are a few caveats that I’ve come across using ZFS though; one is memory! ZFS will try and cache as much data as it can in RAM, so if you have 8GB of RAM (as I have in this box) it will happily use as much of it as it can afford. Rightfully so – I was getting ~96MB/s transferring a 16GB MPEG from one box to the other over our Gig link (that’s from one end of the house to the other!). Mind you, this was just a test configuration using 2x 74GB Western Digital Raptors (WD740ADFD) in a RAID-0 style hitting a single 150GB Western Digital Raptor (WD1500ADFD). They could have gone much higher, but I was happy with that.
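If the cache’s appetite ever becomes a problem, the ARC can be capped on OpenSolaris with a line in /etc/system (takes effect after a reboot) – the 2GB value below is just an illustration:

set zfs:zfs_arc_max = 2147483648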

There are also (as of writing) no recovery tools for ZFS, but these are slated to arrive soon (Q4 2009) – which is quite scary after you read this post about a guy losing 10TB worth of data; however, a possible revert to an older uberblock may fix some problems.

Virtualisation

Initially I wanted to concentrate quite a bit on virtualisation; I tried Xen on OpenSolaris. It was quite easy to set up a Xen Dom0 in OpenSolaris, but with the 2009.06 release you had to tweak the Xen setup a bit. I wasn’t too enthusiastic about using Xen after seeing the performance lag in Windows in my musings. Instead I’m opting for my crush, VirtualBox.

So why use VirtualBox when you can get a bare-metal hypervisor? Firstly, performance seemed sluggish with Xen for me (I didn’t investigate this too much); secondly, I want to be able to run the latest and greatest OS’s without worrying about upgrading Xen (I’m a sucker for OS’s!). VirtualBox development has accelerated at a feverish pace – I started with VirtualBox 1.3 in 2007 and it’s come an insanely long way since then. When a new release comes along, it’s as easy as updating VirtualBox and getting all the benefits. Plus with Sun’s (now Oracle’s) backing of VirtualBox you know things are going to work well on OpenSolaris; the Extras repository makes it as easy as doing a pkg update.
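Roughly, that dance looks like the following – though the publisher name, URL and package name here are guesses for illustration; follow the Extras repository instructions for the real details:

$ pfexec pkg set-publisher -O http://pkg.sun.com/opensolaris/extra/ extra
$ pfexec pkg refresh
$ pfexec pkg install virtualbox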

I’m still quite intrigued by the way KVM is heading and how it will pan out, but for the future zeus, it will be VirtualBox.


Rebuilding Zeus – Part I.5: Change of heart, change of hardware.

October 14th, 2009

After a bit of digging around, I’ve decided my originally spec’d hardware is too much for a boxen that will be on 24×7, especially with the rates for electricity going up next year – every little watt counts. The existing 65W CPU isn’t ideal; instead I’m opting for a 45W CPU, and looking at the lineup, that means it’s going to be a walk down AMD way. Fewer watts, less heat and less noise, noice! See AMD’s product roadmap for 2010-2011.

I mentioned the original specifications in Part I. I’ve decided to change the CPU and motherboard but keep the other bits and bobs – I could lose the graphics card and go onboard, but I felt like leaving it there for now. The target budget is $250 maximum for both CPU+mobo, so this means I’m sticking with DDR2, which implies AM2+, but it must also satisfy:

  • CPU has to be 45W and at least 1.6GHz, dual-core (no more), and has to support virtualisation.
  • Motherboard has to support 8GB (most boards do!), have at least 2x PCIe and a PCI slot; it would be nice if the network card works (gigabit) but no fuss if it doesn’t. No crazy shebangabang WiFi, remotes etc. bling, and if it has onboard video, great – otherwise it’s OK to use a crappy card.

I picked the AMD Athlon X2 5050e because it was cheap (~$80), is a 45W part, supports virtualisation and is an AM2. Next was the motherboard, with the ASUS, Gigabyte & XFX models as my targets.

Chipset-wise only a few boards fit the criteria for a possible match, because the others just don’t have enough SATA ports available onboard. AMD boards primarily come with chipsets supplied by NVIDIA or AMD themselves.

Initially I looked at the ASUS boards (they’ve been nothing but rock solid for me in the past) but after a lot of research scouring through the manufacturer sites I ended up picking the Gigabyte GA-MA790X-UD4P, which is based on the AMD 790X chipset. The board came with 8x SATA ports, 3x PCIe and 2x PCI and a Gigabit NIC, all for $137 from PCCaseGear. Not only was the power consumption lower, but the noise and heat generated were substantially lower too!

Coming in close were the ASUS M4N78 PRO and the ASUS M4A78 PRO; each of those unfortunately had two fewer SATA ports and one less PCIe slot.

GA-MA790X-UD4P

Rebuilding Zeus: Part 1 – Preliminary Research and Installing Ubuntu 9.04 RC1

April 19th, 2009

Just spent a fair chunk of today getting a rebuild of Zeus going – our affectionately dubbed Ubuntu server at home. This is the third rebuild (hardware-wise) in the past 5 years (sheesh, has it been that long?), but I’m not complaining. The first Ubuntu’fied version (5.10 – Breezy Badger) ran on a Pentium 4 3GHz (Socket 478), a noisy little guy that sucked quite a bit of power – my old development box that served me well.

Then with the release of the fornicating Feisty Fawn (Ubuntu 7.04) I moved the server over to an AMD box: an AMD 3200+ on an ASUS A8N-SLI Deluxe (which featured the incredibly shaky NForce 4 SLI chipset) with a modest 2GB of DDR RAM.

NVIDIA nForce4 APIC Woes

Unfortunately I didn’t realise that by using the NForce 4 chipset under Linux I’d have to wrestle with APIC issues, due to problems with the chipset and kernel regressions.

If you fall into the above hole, edit your grub boot menu:

$ sudo vi /boot/grub/menu.lst

And change your booting kernel with two new options:

title           Ubuntu 7.10, kernel 2.6.22-14-generic
root            (hd0,5)
kernel          /vmlinuz-2.6.22-14-generic root=UUID=c7a7bf0a-714a-482e-9a07-d3ed40f519f5 ro quiet splash noapic nolapic
initrd          /initrd.img-2.6.22-14-generic
quiet

You may want to add that to the recovery kernel too, just in case. This effectively disables the onboard APIC controller, as it’s quite buggy. More information is available on Launchpad.

It’s been chugging along nicely for the past 2 years – the time is always inaccurate (about 8 minutes ahead), but the uptime right now is:

thushan@ZEUS:~$ uptime
19:54:06 up 147 days,  7:27,  7 users,  load average: 0.22, 0.43, 0.32

So I figured it’s time to put these issues behind and redo the server infrastructure at home.

Goals

There are some goals in this rebuild.

  • Try out Ext4 and remove the use of ReiserFS and JFS, which don’t seem to be going anywhere (JFS here and here). ZFS would be nice to try out (but no FUSE!), but I’m hoping Btrfs brings some niceties to the table.
  • The new Zeus needs to look at virtualisation a little more. Right now, a lot of the QA for Windows builds of our stuff is done on several machines all over the place. Consolidate them onto one server with VT support and plenty of RAM, and use a hypervisor (mentioned later) to manage testing.
  • Provide the same services as the existing Zeus:
    • SVN + Trac
    • Apache
    • MySQL / Postgres
    • File hosting – a storage vault sharing content across the computers around (the whole house is gigabitted).
    • Fast enough to run dedicated servers for Unreal Tournament, Quake, Call of Duty 4 and a few other games.
    • Profiles, user data needs to be migrated
  • Messing about with the Cloud-Computing functionality in Jaunty.
  • Provide a backend for the Mythbuntu frontends.
  • Last another 2 years

Hardware

My previous workstation motherboard was the awesome ASUS P5W-DH Deluxe with an Intel QX6850 CPU, powered by the Intel 975 chipset, which lasted a lot longer than anyone predicted. But earlier this year I had a problem with the board that warranted an RMA request. As I had to have a machine, I ended up buying an ASUS P5Q-Pro and did a re-install (same CPU). So instead of selling off the P5WDH I’ve decided to use that board coupled with an Intel E6750, which was picked because it supports Intel VT and it was lying around. Otherwise I _wouldn’t_ consider using this setup – overkill!!! But I do want this setup to last and be beefy enough to support a little more than a few VMs running concurrently.

Pretty shots are available here. Otherwise, the test bench, the Tuniq and a pretty shot of my setup at home (no it’s not clean).

Software

Clearly Ubuntu 9.04 is where it’s at: it’s sleeker, blindingly fast to boot thanks to the boot-time optimisations, and has a sexier desktop thanks to the visual tweaking and the new GNOME 2.26 inclusion. The installer has matured greatly: gone is the plain old boring partition editor based on GParted, and there’s a sleek new timezone picker. To make the most of the RAM in the box, the 64-bit edition of ubuntu-desktop is what I’m installing.

Installing Ubuntu? Use UNetbootin!

So you grabbed the latest ISO, burnt it, chucked it into an optical drive and away you go, aye… *IF THIS WAS 2005*!!! As mentioned in an earlier post, grab a copy of UNetbootin, select the ISO you mustered from your local free ISP mirror and throw it onto your USB thumb drive. These days USB drives are dirt cheap – I picked up a Corsair Voyager 8GB (non-GT) for AUD$39.

Why would you want to do that? You won’t need to burn CD-RWs, delete them and burn another ISO; what’s more, it will install in no time. With the Voyager I got the core OS installed in 5 minutes – after selecting the iiNet local software sources mirror. Funky?

Hypervisors

I got into the virtualisation game early – VMWare 2.0 (2000-2001) is where it all began, after seeing a close friend use it. Unfortunately I had to almost give up a kidney to afford to buy it. Then for a brief time I moved to Connectix VirtualPC when VMWare 4.0 arrived and messed up my networking stack, but went back to VMWare 3.0 for a little while. Then I eventually moved back to VirtualPC 2004 after Microsoft acquired Connectix (it was free from the MSDN Subby), and back again to VMWare with version 5.

Fast forward to 2009 and we have some uber-quality hypervisors. VMWare still has the behemoth marketshare, but a little birdie got some extra power from the Sun and impressed everyone lately with its well-roasted features. The critical decision was which hypervisor to use: VMWare Server (1.0, or 2.0 with its web interface – errr!), XenServer (which is now owned by Citrix) or VirtualBox.

After playing around with VMWare Server 1.0 last year I was left wanting more, so naturally I moved to VMWare Server 2.0, not knowing that the familiar client interface is GAWN – in its place is a web-based implementation, VI Web Access. It was slow and clunky and took a while to get used to; the fact that it showed status via the web was funky, but running an entire VM session via a browser plugin (which hosed itself every so often) was far from impressive :(

It finally boiled down to deciding between VMWare Server 1.0 (released mid-2006), leaning onto XenServer (which seems to involve a bit of a learning curve) or moving to a brighter pasture with Sun VirtualBox – which is what I use on my development boxes. I’m still playing around with all three to see how they fare. I am a little biased towards VirtualBox (I reckon it’s awesome, ja!) but as this is a long-term build I can’t knock VMWare Server out just yet, nor rule out going the full para-virtualisation route with XenServer – which is probably what I’ll end up doing.

I’ve only got a few days before the final release of Ubuntu 9.04 arrives, and all this prior research is to make sure things go smoothly next weekend.


AMD Releases Catalyst 9.4 WHQL

April 12th, 2009

A quick note before I head to bed: AMD has released a new Catalyst driver package for April, which is WHQL certified. Amongst the usual fixes, it also comes with a new ATI Overdrive utility that has been enhanced to pick the optimal OC for your box.

Grab it from AMD’s download pages, and view the release notes for more information.


Sun ushers in VirtualBox 2.1 with cool new features!

December 18th, 2008

It only feels like last month that Sun released VirtualBox 2.0, and they’ve just released 2.1, which brings a plethora of additional goodies… from the changelog:

  • Support for hardware virtualization (VT-x and AMD-V) on Mac OS X hosts
  • Support for 64-bit guests on 32-bit host operating systems (experimental; see user manual, chapter 1.6, 64-bit guests, page 16)
  • Added support for Intel Nehalem virtualization enhancements (EPT and VPID; see user manual, chapter 1.2, Software vs. hardware virtualization (VT-x and AMD-V), page 10)
  • Experimental 3D acceleration via OpenGL (see user manual, chapter 4.8, Hardware 3D acceleration (OpenGL), page 66)
  • Experimental LsiLogic and BusLogic SCSI controllers (see user manual, chapter 5.1, Hard disk controllers: IDE, SATA (AHCI), SCSI, page 70)
  • Full VMDK/VHD support including snapshots (see user manual, chapter 5.2, Disk image files (VDI, VMDK, VHD), page 72)
  • New NAT engine with significantly better performance, reliability and ICMP echo (ping) support (bugs #1046, #2438, #2223, #1247)
  • New Host Interface Networking implementations for Windows and Linux hosts with easier setup (replaces TUN/TAP on Linux and manual bridging on Windows)

Some key things to note here: those “cool” people that run OS X can now get hardware virtualisation. Even if you have a 32-bit host operating system, you’re able to run 64-bit guests so long as you enable hardware virtualisation on the CPU (AMD-V or Intel VT-x), as VirtualBox’s hypervisor requires it for this to work. A couple of other major additions – tested personally – include the enhanced virtualisation on the new Nehalem processors (Extended Page Tables & Virtual Processor Identifiers – see below) and the starting block for OpenGL (and later DirectX) acceleration in XP and Vista. Testing this with OpenGL gave some decent performance, though it’s still got a bit of work to do.
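If you want to try the 64-bit-guest-on-a-32-bit-host trick, hardware virtualisation has to be enabled per-VM – something like this (the VM name is made up, and note that 2.x-era VBoxManage used single-dash options where newer releases use double dashes):

$ VBoxManage modifyvm "opensolaris-x64" -hwvirtex on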

The move to include 3D acceleration is an interesting one, considering VMWare recently acquired Tungsten Graphics – the company behind Mesa, the TTM memory manager and Gallium3D. Interesting times ahead – as always :)

What’s an Extended Page Table & that VPID thing???

Virtualisation in the Intel world comes in two flavours, the Intel VT-x and Intel VT-i Architectures. The VT-x is for IA-32 processors, whilst the VT-i is for Itanium processors.

Intel took a slice of the virtualisation pie offered by AMD’s Pacifica architecture, implementing a method of translating ordinary IA-32 page tables from guest-physical addresses to the host-physical addresses used to access memory. This way, guests can handle their own page tables (and the page-faults associated with them) directly, minimising the (sizable) overhead associated with translation. This is known as Extended Page Tables (EPT).

Virtual Processor Identifiers (VPIDs), on the other hand, allow a hypervisor (or VMM) to assign a non-zero VPID to each virtual processor, with the initial VPID (VPID = 0) reserved for the hypervisor itself. This way the CPU can use VPIDs to tag translations in the Translation Lookaside Buffer (TLB), which removes the performance penalty of flushing the TLB on every VM entry and exit.

Both these bits of technology (along with NMI-window exiting) come with the Nehalem processor’s virtualisation enhancements. If you’re interested in a more in-depth explanation, see the article Solving Virtualization Challenges with VT-x and VT-i from the Intel Technology Journal.
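On a Linux host you can quickly check whether your CPU advertises these features, since ept and vpid show up as CPU flags (on a kernel new enough to know about them):

$ egrep -o 'vmx|svm|ept|vpid' /proc/cpuinfo | sort -u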

Other Changes in 2.1

  • VMM: significant performance improvements for VT-x (real mode execution)
  • VMM: support for hardware breakpoints (VT-x and AMD-V only; bug #477)
  • VMM: VGA performance improvements for VT-x and AMD-V
  • VMM: Solaris and OpenSolaris guest performance improvements for AMD-V (Barcelona family CPUs only)
  • VMM: fixed guru meditation while running the Dr. Web virus scanner (software virtualization only; bug #1439)
  • VMM: deactivate VT-x and AMD-V when the host machine goes into suspend mode; reactivate when the host machine resumes (Windows, Mac OS X & Linux hosts; bug #1660)
  • VMM: fixed guest hangs when restoring VT-x or AMD-V saved states/snapshots
  • VMM: fixed guru meditation when executing a one byte debug instruction (VT-x only; bug #2617)
  • VMM: fixed guru meditation for PAE guests on non-PAE hosts (VT-x)
  • VMM: disallow mixing of software and hardware virtualization execution in general (bug #2404)
  • VMM: fixed black screen when booting OS/2 1.x (AMD-V only)
  • GUI: pause running VMs when the host machine goes into suspend mode (Windows & Mac OS X hosts)
  • GUI: resume previously paused VMs when the host machine resumes after suspend (Windows & Mac OS X hosts)
  • GUI: save the state of running or paused VMs when the host machine’s battery reaches critical level (Windows hosts)
  • GUI: properly restore the position of the selector window when running on the compiz window manager
  • GUI: properly restore the VM in seamless mode (2.0 regression)
  • GUI: warn user about non optimal memory settings
  • GUI: structure operating system list according to family and version for improved usability
  • GUI: predefined settings for QNX guests
  • IDE: improved ATAPI passthrough support
  • Networking: added support for up to 8 Ethernet adapters per VM
  • Networking: fixed issue where a VM could lose connectivity after a reboot
  • iSCSI: allow snapshot/diff creation using local VDI file
  • iSCSI: improved interoperability with iSCSI targets
  • Graphics: fixed handling of a guest video memory which is not a power of two (bug #2724)
  • VBoxManage: fixed bug which prevented setting up the serial port for direct device access
  • VBoxManage: added support for VMDK and VHD image creation
  • VBoxManage: added support for image conversion (VDI/VMDK/VHD/RAW) – see the sketch after this list
  • Solaris hosts: added IPv6 support between host and guest when using host interface networking
  • Mac OS X hosts: added ACPI host power status reporting
  • API: redesigned storage model with better generalization
  • API: allow attaching a hard disk to more than one VM at a time
  • API: added methods to return network configuration information of the host system
  • Shared Folders: performance and stability fixes for Windows guests (Microsoft Office Applications)
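That image conversion support is handy for shuffling VMs between hypervisors; roughly, it looks like this (filenames made up, and the exact option spelling varies between 2.x and later releases):

$ VBoxManage clonehd disk.vdi disk.vmdk -format VMDK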

Performance & Updates

Overall, on the two different machines I’ve tried the new 2.1 release on (a QX6850 and a Core i7 965 Extreme), things have “felt” snappier. Unlike the 1.6 release – which was somewhat flaky for me – the 2.x releases of VirtualBox have been solid.

3D Acceleration Option

Don’t take my word for it though – see for yourself.

Gets me a VirtualBox 2.1

Grab your copy and try it out.

  • VirtualBox 2.1.0 for Windows hosts x86 | AMD64
  • VirtualBox 2.1.0 for Solaris and OpenSolaris hosts x86 | AMD64

Give it a shot, heck try OpenSolaris 2008.11 on there just for kicks!
