On Mon, Nov 13, 2006 at 09:46:01AM -0800, Bryan Henderson wrote:
> >Does anyone have any estimates of how much space is wasted by these
> >files without making them a special case? It seems to me that most
> >people have huge disks and don't really care about losing a few KB here
> >and there (especially if it makes more common cases slower).
>
> Two thoughts:
>
> 1) It's not just disk capacity. Using a 4K disk block for 16 bytes of
> data also wastes the time it takes to drag that 4K from disk to memory
> and cache space.

Good point. But wouldn't the page cache suffer regardless? (AFAIK you
can't split a page between files.)

> 2) Making more efficient storage and access of _existing_ sets of files
> isn't usually the justification for this technology. It's enabling new
> kinds of file sets. Imagine all the 16 byte files that never got created
> because the designer didn't want to waste 4K on each. A file with a
> million 16 byte pieces might work better with a million separate files,
> but was made a single file because 64 GB of storage for 16 MB of data
> was not practical. Similarly, there are files that would work better
> with 1 MB blocks, but have 4K blocks anyway, because the designer
> couldn't afford 1 MB for every 16 byte file.

I haven't looked at it closely, but from what I hear, (Free?)BSD has a
nifty feature where it splits a block into smaller fragments (halves,
quarters) during allocation, so a small file doesn't burn a whole block.
As far as I know it is a filesystem (probably UFS) feature. (A quick way
to eyeball the per-file overhead on a given filesystem is sketched
below.)

Just my 2 cents.

Josef "Jeff" Sipek.

--
In personal conversations with technical people, I call myself a hacker.
But when I'm talking to journalists I just say "programmer" or something
like that.
		- Linus Torvalds
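
For what it's worth, here is a minimal sketch of that overhead check. It
is purely illustrative (my own toy program, not anything from UFS or the
kernel): it uses stat(2) to print each file's logical size next to the
space the filesystem actually allocated for it, so the cost of a 4K
block per 16-byte file is easy to see.

/*
 * Illustrative sketch: compare logical file size with allocated space.
 * On Linux (and the BSDs), st_blocks is counted in 512-byte units,
 * so allocated bytes are st_blocks * 512.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct stat st;
	int i;

	for (i = 1; i < argc; i++) {
		if (stat(argv[i], &st) < 0) {
			perror(argv[i]);	/* skip files we can't stat */
			continue;
		}
		printf("%s: %lld bytes of data, %lld bytes allocated\n",
		       argv[i], (long long)st.st_size,
		       (long long)st.st_blocks * 512);
	}
	return 0;
}

On a filesystem with 4K blocks, a 16-byte file typically reports 4096
bytes allocated; a filesystem that hands out sub-block fragments for the
tail would report correspondingly less.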