On 03/13/2010 08:17 AM, Ric Wheeler wrote:
> On 03/13/2010 12:45 AM, Felix Miata wrote:
>> On 2010/03/10 21:28 (GMT-0500) Ric Wheeler composed:
>>
>>> For anyone serious about storage (performance, reliability and power
>>> consumption) this will be a positive step.
>>
>> Not everyone. Users of large numbers of small files and small numbers of
>> large files already lose a heap of space to slack even with a 1024-byte
>> blocksize, and that slack will at least quadruple if they are forced to
>> 4k sectors.
>> http://en.wikipedia.org/wiki/Internal_fragmentation#Internal_fragmentation
>
> Second on my list of annoying replies is a pointer to Wikipedia (trumped
> only by replies with random URLs!).
>
> If you really want to store lots of really tiny files (< 1KB), you
> probably want to look at more efficient ways to store them (tar them up,
> use a lightweight DB, etc.). Having been in the business of making
> storage appliances that stored lots of small files, I can say it is a
> challenge.
>
> Also note that the overhead of creating a file/directory entry/inode in
> most modern file systems can easily consume more space than a tiny file
> itself. If you want to test this, just take your favourite file system
> and make a brand new, empty FS. Fill it with zero-length files and then
> see what your per-file overhead is.
>
> In any case, you could use a file system (like reiserfs) that does tail
> packing.
>
> Ric

Another thing to keep in mind is that most file systems will hit severe
performance issues with file count long before you fill the disk. Assume
that you take a 2TB disk and try to pack it with 1KB files - you need to
be able to store, sort and fsck a file system with close to 2 billion
files (a quick sketch of that arithmetic follows below). How do you index
them? How many files per directory? How deep is your directory tree?

For almost any user, you would be lucky to reach the level of utilization
where fragmentation would even start to be a concern...

ric
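
To make Felix's slack point concrete, here is a rough Python sketch of the
internal-fragmentation arithmetic. The 1500-byte file size and the
one-million file count are illustrative assumptions, not measurements of
any real workload:

def slack_bytes(file_size: int, block_size: int) -> int:
    """Bytes wasted in the final, partially filled block of one file."""
    remainder = file_size % block_size
    return (block_size - remainder) if remainder else 0

for block_size in (1024, 4096):
    # one million 1500-byte files: an assumed small-file workload
    waste = 1_000_000 * slack_bytes(1500, block_size)
    print(f"{block_size}-byte blocks: ~{waste / 2**20:.0f} MiB of slack")

With those assumed numbers you get roughly 523 MiB of slack at a 1024-byte
block size and roughly 2476 MiB at 4096, in line with the "at least
quadruple" claim.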
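
The empty-file test described above can be sketched in Python along these
lines. /mnt/scratch and the 100,000 file count are placeholders, and this
should only be pointed at a throwaway, freshly made file system:

import os

MOUNT_POINT = "/mnt/scratch"   # assumed scratch mount, adjust to taste
COUNT = 100_000                # assumed file count

def free_bytes(path: str) -> int:
    st = os.statvfs(path)
    return st.f_bfree * st.f_frsize

before = free_bytes(MOUNT_POINT)
for i in range(COUNT):
    # touch an empty file: no data blocks, only inode/dirent overhead
    open(os.path.join(MOUNT_POINT, f"empty_{i}"), "w").close()
os.sync()
after = free_bytes(MOUNT_POINT)

print(f"per-file metadata overhead: ~{(before - after) / COUNT:.0f} bytes")

The per-file figure you get will depend entirely on the file system under
test and its inode and directory layout.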
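
And the back-of-the-envelope arithmetic behind the "close to 2 billion
files" figure, taking 2TB in the decimal sense disks are marketed in:

disk = 2 * 10**12   # 2 TB, decimal, as disks are marketed
file_size = 1024    # 1 KB per file, ignoring all metadata
print(f"{disk // file_size:,} files")   # -> 1,953,125,000

which comes to 1,953,125,000 files before counting any inode or directory
overhead at all.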