Why you should never celebrate too early.
Damn! That is whack!
Ah, the days of dialup porn – and, way before my time, the days of ASCII porn on BBSes. But just when you think the entire world is turned on by broadband (in the case of Denmark, the Netherlands, Japan and South Korea et al. having ridiculous interweb speeds), it turns out there are still some trying to get their fix on dialup. Indeed, the internet is for porn.
There are some things money can't buy; for everything else, there's YouTube.
I miss Japan.
File systems are a hairy topic. On Windows you should be using NTFS (the days of FAT are long gone!), but on Linux, BSD and *Solaris we still have a wide variety to pick and choose from depending on our needs. I've always been a JFS and XFS fan (previously ReiserFS) until Btrfs goes mainstream (one thing to hang out for in Linux Kernel 2.6.29!), and often I'd have a mixture of all three. Our main server at home – affectionately dubbed Zeus, after our lovable Australian Customs puppy Zeus – uses XFS, JFS and Ext3.
JFS manages the home directories and core file system, ReiserFS the temp folder, and XFS the heavy file shares – which span multiple terabytes of files over an LVM (with each file being 1-2GB in size). The reasoning behind opting for XFS over another file system for the file server was that XFS performs incredibly well under heavy load and scales well when you know the files are big (over 500MB). Overall I've always felt that XFS provides consistent performance and scalability in comparison to the others – but you may think otherwise.
Unfortunately XFS – whilst quite an excellent file system for managing large files – seems to suffer from fragmentation over time (especially if you use your file system for DVR – e.g. a Myth backend host) or if the disk gets close to filling up. Luckily, XFS has two utilities to manage this fragmentation.
xfs_db – XFS Debug Information
Used to examine an XFS filesystem for problems or to gather information about it.
xfs_fsr – File System Reorganiser
Improves the organisation of mounted file systems. The reorganisation algorithm operates on one file at a time, compacting or otherwise improving the layout of the file extents (contiguous blocks of file data).
In Debian/Ubuntu (and derivatives) these two utilities are found in the xfsdump package. Using these two utilities we can work out the health of the file system (xfs_db) and hopefully tune/optimise it (xfs_fsr). I took the plunge last night and optimised Zeus's main file storage partition:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdf7              40G  3.5G   37G   9% /
varrun               1014M  4.5M 1010M   1% /var/run
varlock              1014M  8.0K 1014M   1% /var/lock
udev                 1014M  112K 1014M   1% /dev
devshm               1014M     0 1014M   0% /dev/shm
lrm                  1014M   34M  980M   4% /lib/modules/2.6.22-15-generic/volatile
/dev/sdf6            1023M   38M  986M   4% /boot
/dev/sdf10            235G  173G   63G  74% /home
/dev/sdf9              10G  544K   10G   1% /opt
/dev/sdf8              10G  2.7G  7.4G  27% /var
/dev/mapper/Storage   2.3T  1.9T  408G  83% /media/LVM/Storage
/dev/sde1             466G  396G   71G  85% /media/Backups
As you can see, the LVM "Storage" mount has just under 20% free space and the non-LVM "Backups" partition has 15% free space. Both are XFS volumes; to check the health of the two, use the xfs_db command to gather some information.
$ sudo xfs_db -c frag -r /dev/mapper/Storage
$ sudo xfs_db -c frag -r /dev/sde1
Here we're asking xfs_db to open the file system in read-only mode (-r), passing in a command (-c) to get the file fragmentation data (frag) for the device (/dev/*). When we use the frag command, it returns information pertaining only to the file data in the filesystem, as opposed to the fragmentation of free space (which we can gauge by passing the freesp command). The output of the commands for Zeus appears below.
thushan@ZEUS:~$ sudo xfs_db -c frag -r /dev/sde1
actual 189356, ideal 148090, fragmentation factor 21.79%
thushan@ZEUS:~$ sudo xfs_db -c frag -r /dev/mapper/Storage
actual 406056, ideal 21584, fragmentation factor 94.68%
Wow! The LVM partition (which spans 4 drives) has around 95% fragmentation! Yikes!!! The partition holds quite a few virtual machine images and various large files (DV captures etc.). The "Backups" partition (sde1), on the other hand, isn't as badly fragmented.
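If you're curious where those percentages come from, xfs_db derives the fragmentation factor from the actual versus ideal extent counts: (actual - ideal) / actual * 100. A quick sketch reproducing the Storage figure above:

```shell
# Reproduce xfs_db's fragmentation factor from its actual/ideal extent counts.
# factor = (actual - ideal) / actual * 100
actual=406056   # extents the files currently occupy
ideal=21584     # extents they would occupy if perfectly contiguous
awk -v a="$actual" -v i="$ideal" \
    'BEGIN { printf "fragmentation factor %.2f%%\n", (a - i) / a * 100 }'
```

Plugging in the Backups numbers (actual 189356, ideal 148090) gives the 21.79% reported above.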
So now we've found our problem and it's time to fix it. The first thing to do – and note that we can fix this on a live, running system – is to find a time when the partition will see very little use (like overnight) so you can let it do its thing without unnecessary burden. Then let's make use of the File System Reorganiser utility (xfs_fsr) and ask it to reorganise our partition to the best of its ability.
$ sudo xfs_fsr -t 25200 /dev/mapper/Storage -v
$ sudo xfs_fsr -t 25200 /dev/sde1 -v
Now this is much simpler: the xfs_fsr utility is being told to reorganise /dev/* with a timeout (-t) of 7 hours (60 × 60 × 7 = 25200), which is specified in seconds. Because I like to see how much is done, I also specified the verbose output option (-v). Let it do its thing, and hopefully when you return the last bit of output will show the extents before, how many after and the inode, something like this:
extents before:5 after:1 DONE ino=4209066103
ino=4209066107
extents before:5 after:1 DONE ino=4209066107
ino=4209066101
extents before:4 after:1 DONE ino=4209066101
ino=4209066091
extents before:3 after:1 DONE ino=4209066091
ino=4209066093
extents before:3 after:1 DONE ino=4209066093
ino=4209066105
extents before:2 after:1 DONE ino=4209066105
ino=4209066143
extents before:27 after:1 DONE ino=4209066143
Now it's time to go back and check how well the reorganisation went:
$ sudo xfs_db -c frag -r /dev/mapper/Storage
And the results?
thushan@ZEUS:~$ sudo xfs_db -c frag -r /dev/mapper/Storage
actual 21652, ideal 21584, fragmentation factor 0.31%
Lovely! What a difference – and you'll notice the improvement immediately if you start moving or transferring files around.
Ideally, you may want to set up a cron task to let this process run (maybe with a lower timeout) overnight or when there's low load. What's great about the xfs_fsr utility is that it's smart enough to remember where it finished up last time and continue from there. It's a shame Ubuntu doesn't do this already.
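A minimal sketch of that cron idea – the 03:00 schedule and two-hour (7200 second) timeout here are just assumptions, tune them to your own quiet hours:

```shell
# /etc/crontab entry – kick off xfs_fsr nightly at 03:00 with a 2h timeout.
# xfs_fsr remembers where it stopped, so a short nightly window is enough.
0 3 * * *   root   /usr/sbin/xfs_fsr -t 7200 /dev/mapper/Storage
```

Run without a device argument, xfs_fsr will instead walk the mounted XFS filesystems it finds in /etc/mtab.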
A new book titled The Race for a New Game Machine: Creating the Chips Inside the XBox 360 and the Playstation 3 was released on the 1st of January this year, looking into the development of the Microsoft Xbox 360 and the Sony Playstation 3 which, as it turned out in the end, were both developed by IBM.
The authors of the book, David Shippy (the man behind the brains of the Cell) and his co-worker Mickie Phipps, go into the depths of nerdism to give an insight into the development of the Cell processor. From the Wall Street Journal review:
When the companies entered into their partnership in 2001, Sony, Toshiba and IBM committed themselves to spending $400 million over five years to design the Cell, not counting the millions of dollars it would take to build two production facilities for making the chip itself. IBM provided the bulk of the manpower, with the design team headquartered at its Austin, Texas, offices. Sony and Toshiba sent teams of engineers to Austin to live and work with their partners in an effort to have the Cell ready for the Playstation 3's target launch, Christmas 2005.
But a funny thing happened along the way: A new “partner” entered the picture. In late 2002, Microsoft approached IBM about making the chip for Microsoft’s rival game console, the (as yet unnamed) Xbox 360. In 2003, IBM’s Adam Bennett showed Microsoft specs for the still-in-development Cell core. Microsoft was interested and contracted with IBM for their own chip, to be built around the core that IBM was still building with Sony.
All three of the original partners had agreed that IBM would eventually sell the Cell to other clients. But it does not seem to have occurred to Sony that IBM would sell key parts of the Cell before it was complete and to Sony’s primary videogame-console competitor. The result was that Sony’s R&D money was spent creating a component for Microsoft to use against it.
And here’s the real kicker.
Mr. Shippy and Ms. Phipps detail the resulting absurdity: IBM employees hiding their work from Sony and Toshiba engineers in the cubicles next to them; the Xbox chip being tested a few floors above the Cell design teams. Mr. Shippy says that he felt “contaminated” as he sat down with the Microsoft engineers, helping them to sketch out their architectural requirements with lessons learned from his earlier work on Playstation.
The deal only got worse for Sony. Both designs were delivered on time to IBM’s manufacturing division, but there was a problem with the first chip run. Microsoft had had the foresight to order backup manufacturing capacity from a third party. Sony did not and had to wait another six weeks to get their first chips. So Microsoft actually got the chip that Sony helped design before Sony did. In the end, Microsoft’s Xbox 360 hit its target launch in November 2005, becoming its own success. Because of various delays, the Playstation 3 was pushed back a full year.
The book (which arrived on Friday!) goes into all the juicy bits that led up to the delivery of both processors – well worth the US$14 it's listed for on Amazon. Whilst I haven't finished the entire book yet, thus far it's full of twists, corporate musings and tricks, with an interesting look at the teams and people that made these two products possible in the end. You'll be hooked from the first page – I guarantee it.
BD+ is a component of the Blu-ray Disc Digital Rights Management system. It was developed by Cryptography Research Inc. and is based on their Self-Protecting Digital Content concept. BD+ played an important role in the past format war of Blu-ray Disc and HD DVD. Several studios have cited Blu-ray Disc’s adoption of the BD+ anti-copying system as the reason they supported Blu-ray Disc over HD DVD.
One of the more humorous observations was that, unlike DVD (whose CSS copy protection fell to DeCSS) and AACS (which powered the bulk of the HD DVDs of the time), BD+ would uphold its protection for at least the next 10 years. This may have been one of the key factors in the HD wars, but alas, it seems someone has found a way of travelling into the future and finding the break.
Oopho2ei (who claims he's not a professional programmer :O) from the Doom9 forums, along with a few others (bmnot, schluppo, Disabled, evdberg), has (it seems) successfully broken the BD+ protection scheme in a grand total of 5 weeks and 3 days (starting on the 24th of August). They have restored the BD+ protected "The Day After Tomorrow":
I am glad to announce the first successful restoration of the BD+ protected movie “The Day After Tomorrow” in linux. It was done using a blue ray drive with patched firmware (to get the volume id), DumpHD to decrypt the contents according to the AACS specification and the BDVM debugger from this thread to generate the conversion table. The conversion table is the key information to successfully repair all the broken parts in m2ts files to restore the original video content. This small tool was finally used to repair the main movie file “00001.m2ts” according to the conversion table.
To verify the correctness i compared my 00001.m2ts with the one AnyDVD-HD creates and they both match. The MD5 hash of this 30GB large file is in both cases "0fa2bc65c25d7087a198a61c693a0a72".
Breaking the code is no simple feat: Oopho2ei and team have had to reimplement the VM that runs the BD+ protection layer, and they realise there's a fair chance it could be blocked at a later stage and that content code may phone home:
There has to be some kind of firewall around the virtual machine which validates all communication between the ( potentially hostile ) content code and the outside world (traps and events). Part of the rules which are enforced by that firewall are the parameter checks on every trap call. It’s obvious that the traps and the event handling itself has to be carefully implemented. I believe this additional effort is necessary to prevent the content code from breaking out of it’s sandboxed environment and do nasty things like gathering user information and “calling home” when it detects an unlicensed emulator. So because these additional security measures make things more difficult i suggested to test this code first with the easy traps.
I’ll just say: due to certain properties of BD+, once you’re past a certain point, you can handle it pretty much without reversing – BD+ itself then helps you out – on any player
Actually you’d have to know how BD+ really works, to know what I meant (and even then you probably wouldn’t ).
But if I start unraveling that, I’d be finding myself looking for a new job by next week
I would like to stress again that this project wasn’t intended to circumvent copy protection and promote piracy. This can already be done using commercial software like AnyDVD-HD. Instead this project was an attempt to enable users of open source operating systems (like linux) to playback their BD+ protected discs without having to use proprietary software. Furthermore only two movies “I Robot” and “The Day After Tomorrow” have been proven to be handled correctly so far. Obviously there is still a lot of debugging to be done.
Enuff chit-chat – go download and install.
NOTE: The server is no doubt being hammered right now, so be patient.
Just a sample of the subtitles
You need a bun to bite Benny Lava…
Have you been high toooday?
I see the nuns are gay!
My brother yelled to me…
I love you inside Ed…
My loony bun is fine Benny Lava!
Minor bun engine made Benny Lava!
I told a highschool girl…
I love you inside me…