Terabytes on a Diet
Peter Chubb
School of Computer Science and Engineering
UNSW,
Sydney 2052, Australia
Abstract
You can buy a multi-terabyte RAID array off the shelf nowadays. But it's not much use if you can't plug it into your trusty Linux box...
Although the block layer is in flux, there's still a lot of careless coding that means:
- Even 64-bit platforms are limited to 1 or 2 TB filesystems (a 32-bit type holds the sector number, and the sector size is hard-coded to 512 bytes)
- Even where the partitioning scheme (e.g., EFI) allows larger discs to be partitioned, other limitations prevent them from being used to their full capacity
- Even though the page-cache limit is 16 TB with 4 KB pages (and indeed, if you can create a file this big you can read and write it!), you can't have a filesystem that big (see the sketch after this list).
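As a rough illustration (not taken from the paper), the arithmetic behind the first and last points looks like this, assuming 512-byte sectors and 4 KB pages:

/* Where the 1 TB / 2 TB / 16 TB limits come from. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t sector = 512;   /* hard-coded sector size        */
    uint64_t page   = 4096;  /* page size on most platforms   */

    /* A signed 32-bit sector number overflows at 2^31 sectors = 1 TB;
     * an unsigned one at 2^32 sectors = 2 TB.                          */
    printf("signed 32-bit sectors:   %llu bytes\n",
           (unsigned long long)(((uint64_t)1 << 31) * sector));
    printf("unsigned 32-bit sectors: %llu bytes\n",
           (unsigned long long)(((uint64_t)1 << 32) * sector));

    /* The page cache indexes a file by 32-bit page number, so with
     * 4 KB pages the largest cacheable file is 2^32 * 4 KB = 16 TB.   */
    printf("32-bit page index:       %llu bytes\n",
           (unsigned long long)(((uint64_t)1 << 32) * page));
    return 0;
}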
So...
I set out to remove these limitations on both 64- and 32-bit platforms.
But how do you test support for huge (>2 TB) filesystems under Linux when the biggest disc you have is 100 GB? Simple: write a simulator, and use a sparse file for the disc contents. But... it's not that simple, as I'll explain in my talk.
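A minimal sketch of the sparse-file trick (the paper's actual simulator is not reproduced here; the path and size below are illustrative only):

/* Create a file whose apparent size is 3 TB but which occupies almost
 * no real disc space until blocks are actually written.               */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/fake-disc.img";            /* hypothetical path */
    off_t size = (off_t)3 * 1024 * 1024 * 1024 * 1024;  /* 3 TB              */

    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* ftruncate extends the file without allocating blocks, so the file
     * is sparse: reads of unwritten regions simply return zeroes.       */
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    close(fd);
    return 0;
}

Such a file can then be presented to the kernel as a block device by a simulator driver, so that partitioning tools and filesystems see a multi-terabyte disc while the backing store stays tiny.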
BibTeX Entry
@inproceedings{Chubb_02b,
  address   = {Melbourne, Australia},
  author    = {Peter Chubb},
  booktitle = {Conference for Unix, Linux and Open Source Professionals (AUUG)},
  month     = sep,
  paperurl  = {https://trustworthy.systems/publications/papers/Chubb_02b.pdf},
  title     = {Terabytes on a Diet},
  year      = {2002}
}