Re: Allocation strategy - dynamic zone for small files

>Does anyone have any estimates of how much space is wasted by these
>files without making them a special case?  It seems to me that most
>people have huge disks and don't really care about losing a few KB here
>and there (especially if it makes more common cases slower).

Two thoughts:

1) It's not just disk capacity.  Using a 4K disk block for 16 bytes of 
data also wastes the time it takes to drag that 4K from disk to memory, 
and it wastes cache space once it's there.

2) More efficient storage and access of _existing_ sets of files 
isn't usually the justification for this technology.  It's enabling new 
kinds of file sets.  Imagine all the 16-byte files that never got created 
because the designer didn't want to waste 4K on each.  A file with a 
million 16-byte pieces might work better as a million separate files, 
but was made a single file because 4 GB of storage for 16 MB of data was 
not practical.  Similarly, there are files that would work better with 1 
MB blocks, but have 4K blocks anyway, because the designer couldn't afford 
1 MB for every 16-byte file.

--
Bryan Henderson                     IBM Almaden Research Center
San Jose CA                         Filesystems


