Posts Tagged ‘lvm’

Part I: Rebuilding ZEUS, the journey of training the next home server

October 6th, 2009

I’ve been looking at upgrading our existing home server from the archaic (and unsupported!) Ubuntu Gutsy (because I was feeling gutsy at the time) to something newer and fresher that will last me at least another two years. This is purely for my own documentation.

Current Setup

Currently running an AMD setup with Ubuntu Gutsy (7.10) – I didn’t think it would last this long, honest! Ubuntu 6.06 had too many hardware/driver incompatibilities.

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=7.10
DISTRIB_CODENAME=gutsy
DISTRIB_DESCRIPTION="Ubuntu 7.10"

On an ASUS A8N-SLI Deluxe motherboard (because, you know, servers need SLI!) sporting an AMD Athlon64 3200+ (the only AMD CPU at home!) with 2GB of RAM (hey, DDR1 wasn’t cheap enough!)

lspci

00:00.0 Memory controller: nVidia Corporation CK804 Memory Controller (rev a3)
00:01.0 ISA bridge: nVidia Corporation CK804 ISA Bridge (rev f3)
00:01.1 SMBus: nVidia Corporation CK804 SMBus (rev a2)
00:02.0 USB Controller: nVidia Corporation CK804 USB Controller (rev a2)
00:02.1 USB Controller: nVidia Corporation CK804 USB Controller (rev a3)
00:04.0 Multimedia audio controller: nVidia Corporation CK804 AC'97 Audio Controller (rev a2)
00:06.0 IDE interface: nVidia Corporation CK804 IDE (rev f2)
00:07.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
00:09.0 PCI bridge: nVidia Corporation CK804 PCI Bridge (rev f2)
00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev f3)
00:0b.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0c.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0d.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev f3)
00:0e.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:00.0 VGA compatible controller: nVidia Corporation G70 [GeForce 7300 GT] (rev a1)
05:06.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:07.0 RAID bus controller: Silicon Image, Inc. Adaptec AAR-1210SA SATA HostRAID Controller (rev 02)
05:0a.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
05:0b.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
05:0c.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13)

/proc/cpuinfo

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 47
model name      : AMD Athlon(tm) 64 Processor 3200+
stepping        : 2
cpu MHz         : 1000.000
cache size      : 512 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow up pni lahf_lm ts fid vid ttp tm stc
bogomips        : 2011.59
clflush size    : 64

This faithful boxen has been our primary fileserver (XFS+LVM, 3TB) – used internally at home and also by others who upload their stuff to be backed up. It runs Subversion repositories, Apache/lighttpd test servers for PHP work, and virtualised Windows 2003, Windows 2000 and SQL Server instances for testing, plus several other things (think TeamCity, continuous-integration tools, Confluence etc.). It’s also been damn convenient when you’re at work or on holidays to be able to log in, muse about via SSH and even fix things remotely.

Needs & Wants

The new server will need to fulfil the following roles:

  • Function as a NAS to continue to offer backup (via users’ home directories) and storage options
    • No file-system constraints aside from no Ext3 or ReiserFS.
  • Still offer the ability to run virtual machines – CentOS, Ubuntu and Windows guests for testing, all running in bridged networking mode
  • No real need for a GUI (I can consider myself a little more l33t than I was a few years ago)
  • Run a Subversion repository (not that hard!)

The idea is to have a bare-bones operating-system install and let the virtual machines handle the hard and ugly work – webservers to test things, servers to try development deployments (Java) and other bits and pieces. The core OS just has to manage the NAS and allow SSH access, which in turn provides Subversion access (see the sketch below).
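
As a quick hedged illustration of that last point (the hostname and repository path here are hypothetical), the svn+ssh:// scheme needs nothing extra on the core OS beyond the subversion package and SSH itself – the client simply spawns svnserve over the SSH session:

$ svn checkout svn+ssh://thushan@zeus/srv/svn/repos/project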

Hardware

The hardware is mostly picked from things I had around the place; the only thing I’ve bought is new sticks of RAM.

  • Motherboard: ASUS P5QL-PRO
    This board offers some excellent specifications via the P43 chipset. The things I looked for were the number of SATA ports ‘out of the box’ (6 native SATA2), the number of 1x PCIe slots (2!) for future additions of PCIe SATA adapters, and the maximum amount of memory possible (8GB). And of course it had to be cheap and able to run the CPU I had lying around. A Gigabit NIC was also important (dual would be better!), but if it wasn’t supported I had trusty Intel PRO 1000MT Server PCI cards to fill the void – almost everything supports them (e1000)!
  • CPU: Intel Core 2 Duo E6750 – 2.66GHz (65W TDP, VT)
    The priorities were Intel VT support, a low TDP and a dual-core that isn’t clocked too high.
  • RAM: Corsair TWIN2X4096-6400C5 (4GB kit x 2 = 8GB)
    Cheapy cheapy, twice the fun of a regular kit, slightly higher CAS – but who CAreS, this isn’t being overclocked.
  • Graphics: ASUS 9400GT PCI-Express
    The cheapest graphics card to be found at the legendary & award-winning computer store MSY Technologies. Depending on how the drivers go (I’m usually biased towards ATI for all Linuxes) I might end up paying for an ATI card later.

Next up: the investigation. Be warned, though – I started this back in June/July (possibly a bit earlier).

Mounting and activating LVM volumes from a boot CD to recover data in Linux

September 2nd, 2009

I’ve been working heavily with Red Hat Enterprise Linux (and subsequently CentOS) the past few months (shh! don’t tell my MSFT homey!) and one of the great things about CentOS and RHEL is that they both install using LVM – which makes life a helluvah lot easier when time passes and you realise you’re running out of space on a drive.

But today I had to recover some data from an LVM partition and copy some bits over to another partition without actually booting the CentOS install (it was bj0rked by yours truly!). What to do? Throw in an Ubuntu LiveCD (or any other live CD) and just mount the partitions 🙂

The first thing we need to do is install the LVM tools – remember, these commands need to be run as root (via sudo) to work.

$ aptitude install lvm2

Then scan for any available physical volumes on any of the drives.

$ pvscan

Scan for any Volume Groups that may be present.

$ vgscan

Now activate any of the Volume Groups it finds; running this makes the logical volumes known to the kernel.

$ vgchange --available y

Then let it scan for any Logical Volumes on any of the drives.

$ lvscan

After running the logical volume scan it will show the path to each logical volume; for my boxen it gives something like this:

ACTIVE            '/dev/LVM/Data' [5.26 TB] inherit

You simply mount the path specified and browse as normal 🙂

$ mount /dev/LVM/Data /mnt
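
When you’re done (a small hedged sketch – the mount point and volume group name match the ones above), it’s worth unmounting and deactivating the volume group again before rebooting back into the repaired install:

$ umount /mnt
$ vgchange --available n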

Enjoy.

Maintaining your XFS: defragmenting with the XFS Filesystem Reorganiser (xfs_fsr)

January 25th, 2009

File systems are a hairy topic: on Windows you should be using NTFS (the days of FAT are long gone!), but on Linux, BSD and *Solaris we still have a wide variety to pick and choose from depending on our needs. I’ve always been a JFS and XFS fan (previously ReiserFS) until Btrfs goes mainstream (one thing to hang out for in Linux kernel 2.6.29!), and often I’d have a mixture of all three. Our main server at home – affectionately dubbed Zeus, after our lovable Australian Customs puppy Zeus – uses XFS, JFS and Ext3.

JFS manages the home directories and core file system, ReiserFS the temp folder and XFS the heavy file shares – which span multiple terabytes of files over LVM (with each file being 1-2GB in size). The reasoning behind opting for XFS over another file system for the file server was that XFS performs incredibly well under heavy load and scales well when you know the files are big (over 500MB). Overall I’ve always felt that XFS provides consistent performance and scalability in comparison to the others – but you may think otherwise.

Unfortunately XFS – whilst an excellent file system for managing large files – seems to suffer from fragmentation over time (especially if you use the file system for DVR duties, e.g. a Myth backend host) or if the disk gets close to filling up. Luckily XFS has two utilities to manage this fragmentation:

  • xfs_db – XFS Debug Information
    Used to examine an XFS filesystem for problems or gather information about the XFS file system.
  • xfs_fsr – XFS Filesystem Reorganiser
    Improves the organisation of mounted file systems. The reorganisation algorithm operates on one file at a time, compacting or otherwise improving the layout of the file extents (contiguous blocks of file data).

In Debian/Ubuntu (and derivatives) xfs_db ships in the xfsprogs package, while xfs_fsr is found in xfsdump. Using these two utilities we can work out the health of the file system (xfs_db) and hopefully tune/optimise it (xfs_fsr). I took the plunge last night and optimised Zeus’s main file storage partition:

Filesystem            Size Used Avail Use% Mounted on
/dev/sdf7              40G  3.5G   37G   9% /
varrun               1014M  4.5M 1010M   1% /var/run
varlock              1014M  8.0K 1014M   1% /var/lock
udev                 1014M  112K 1014M   1% /dev
devshm               1014M     0 1014M   0% /dev/shm
lrm                  1014M   34M  980M   4% /lib/modules/2.6.22-15-generic/volatile
/dev/sdf6            1023M   38M  986M   4% /boot
/dev/sdf10            235G  173G   63G  74% /home
/dev/sdf9              10G  544K   10G   1% /opt
/dev/sdf8              10G  2.7G  7.4G  27% /var
/dev/mapper/Storage
                      2.3T  1.9T  408G  83% /media/LVM/Storage
/dev/sde1             466G  396G   71G  85% /media/Backups

As you can see, the LVM “Storage” mount has just under 20% free space and the non-LVM “Backups” partition has about 15% free. Both are XFS volumes; to check their health, use the xfs_db command to gather some information.

$ sudo xfs_db -c frag -r /dev/mapper/Storage
$ sudo xfs_db -c frag -r /dev/sde1

Here we’re asking xfs_db to open the file system in read-only mode (-r), passing in a command (-c) to get the file fragmentation data (frag) for the device (/dev/*). The frag command returns information pertaining only to the file data in the filesystem, as opposed to the fragmentation of free space (which we can gauge by passing the freesp command instead). The output of the commands for Zeus appears below.

thushan@ZEUS:~$ sudo  xfs_db -c frag -r /dev/sde1
actual 189356, ideal 148090, fragmentation factor 21.79%

thushan@ZEUS:~$ sudo  xfs_db -c frag -r /dev/mapper/Storage
actual 406056, ideal 21584, fragmentation factor 94.68%

Wow! The LVM partition (which spans 4 drives) has around 95% fragmentation! Yikes!!! (The factor is roughly (actual − ideal) ÷ actual, so 406056 extents against an ideal of 21584 gives 94.68%.) The partition holds quite a few virtual machine images and various large files (DV captures etc.). The “Backups” drive (sde1), on the other hand, isn’t as badly fragmented.
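
For the curious, free-space fragmentation can be gauged the same way using the freesp command mentioned earlier – a minimal sketch against the same device, which prints a histogram of free extents by size:

$ sudo xfs_db -c freesp -r /dev/mapper/Storage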

So now we’ve found our problem and it’s time to fix it. The first thing to do – and note that we can fix this on a live, running system – is to find a time when the partition will see very little use (like overnight), so it can do its thing without unnecessary burden. Then we make use of the XFS Filesystem Reorganiser utility (xfs_fsr) and ask it to reorganise our partition to the best of its ability.

$ sudo xfs_fsr -t 25200 /dev/mapper/Storage -v
$ sudo xfs_fsr -t 25200 /dev/sde1 -v

This is much simpler: the xfs_fsr utility is being told to reorganise /dev/* with a timeout (-t) of 7 hours (60 × 60 × 7 = 25200), specified in seconds. Because I like to see how much is done, I also specified the verbose output option (-v). Let it do its thing and hopefully when you return the last bit of output will show, for each inode, the extents before and after, something like this:

extents before:5 after:1 DONE ino=4209066103
ino=4209066107
extents before:5 after:1 DONE ino=4209066107
ino=4209066101
extents before:4 after:1 DONE ino=4209066101
ino=4209066091
extents before:3 after:1 DONE ino=4209066091
ino=4209066093
extents before:3 after:1 DONE ino=4209066093
ino=4209066105
extents before:2 after:1 DONE ino=4209066105
ino=4209066143
extents before:27 after:1 DONE ino=4209066143

Now it’s time to go back and check how well the file system reorganising went:

$ sudo xfs_db -c frag -r /dev/mapper/Storage

And the results?

thushan@ZEUS:~$ sudo  xfs_db -c frag -r /dev/mapper/Storage
actual 21652, ideal 21584, fragmentation factor 0.31%

Lovely! What a difference – and you’ll notice the improvement immediately if you start moving or transferring files around.

Ideally you may want to set up a cron task to let this process run (maybe with a lower timeout) overnight or when there’s low load – a sketch follows below. What’s great about the xfs_fsr utility is that it’s smart enough to remember where it finished up last time and continue from there. It’s a shame Ubuntu doesn’t do this out of the box already.
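
A minimal crontab sketch (assumptions: xfs_fsr lives at /usr/sbin/xfs_fsr as it does on Ubuntu, a two-hour window starting at 3am is acceptable, and the device is the same as above – add it via sudo crontab -e so it runs as root):

0 3 * * * /usr/sbin/xfs_fsr -t 7200 /dev/mapper/Storage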
