On Tue, 2009-02-24 at 11:23 -0800, Adam Williamson wrote:
> On Tue, 2009-02-24 at 13:47 -0500, Dan Williams wrote:
> >
> > As has been pointed out a couple of times, it's *not* always efficient
> > to use the filesystem, because the filesystem usually relies on block
> > sizes. Thus you waste lots of space if your file is < 4k.
>
> Has anyone quantified this at any point?
>
> If 'lots of space' turns out to be 150MB, who the frick cares? I bought
> a 500GB hard disk for $60. It was the smallest one they sell any more. I
> bought an 8GB micro SD card - the size of a fricking fingernail - for
> $50 a couple of months back. Where are we actually dealing with space
> limitations any more? Not saying that there isn't somewhere, just that
> we need to quantify if block sizes are actually a practical problem for
> the use cases we're considering or not.

It's easy enough to run some numbers. Let's pull a big number out of a
hat: 10000 configuration keys. Let's assume a block size of 4k, and that
all key values are 4k or less. 10000 * 4k = 40000k, or ~39M. Or working
backwards, 150M = 153600k, and 153600k / 4k = 38400 keys.

Anyway, it was just an idea. I don't expect it to be suitable for
*every* purpose. It just annoys me that people so readily discount the
filesystem, only to re-invent what is essentially a filesystem in
userspace *anyway*. And re-invent it badly. How many thousands of
man-years have gone into researching filesystem design over the past,
what, 40 years? How many thousands of man-years have gone into
optimizing the Linux kernel and ext2/3/4 specifically? Do you really
think you can do better?
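If anyone wants an actual measurement instead of back-of-the-envelope
numbers, something like this rough Python sketch would do it. It assumes
a 4k block size and simply rounds every file up to a whole block as a
worst case (it ignores sparse files and tail packing); the ~/.gconf
default is only an example path, point it at whatever directory holds
your one-file-per-key data:

    #!/usr/bin/env python
    # Rough sketch: compare the apparent size of a tree of small files
    # with the size they occupy once each file is padded to a block.
    # BLOCK = 4096 is an assumption; real allocation could instead be
    # read from os.stat(path).st_blocks * 512.
    import os
    import sys

    BLOCK = 4096  # assumed filesystem block size

    def usage(root):
        apparent = 0   # bytes the values actually contain
        allocated = 0  # bytes consumed after padding each file to a block
        count = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                size = os.path.getsize(os.path.join(dirpath, name))
                apparent += size
                # every non-empty file costs at least one full block
                allocated += ((size + BLOCK - 1) // BLOCK) * BLOCK
                count += 1
        return count, apparent, allocated

    if __name__ == '__main__':
        root = sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser('~/.gconf')
        count, apparent, allocated = usage(root)
        print("%d files: %.1fk apparent, %.1fk in %dk blocks (%.1fk overhead)" % (
            count, apparent / 1024.0, allocated / 1024.0, BLOCK / 1024,
            (allocated - apparent) / 1024.0))

Run it against a real config tree and the "wasted" space stops being
hypothetical, so we could argue about a measured number instead.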