Re: Allocation strategy - dynamic zone for small files

>> 1) It's not just disk capacity.  Using a 4K disk block for 16 bytes of
>> data also wastes the time it takes to drag that 4K from disk to memory
>> and cache space.
>
>Good point. But wouldn't the page cache suffer regardless? (You can't
>split up pages between files, AFAIK.)

Yeah, you're right, if we're talking about granularity finer than the page
size.  But as long as we're just talking about techniques to reduce
internal fragmentation in the disk allocations, there's no reason either
the cache usage or the data transfer traffic has to be affected: the fact
that a whole block is allocated doesn't mean you have to read or cache
the whole block.

But head movement and rotational latency are worth considering.  If you 
cram 100 files into a track, some access patterns are going to be faster 
than if you have to spread them out across 10 tracks with a lot of empty 
space in between.  That's another reason that I sometimes see people pile 
a bunch of data into a large file and essentially make a filesystem within 
that file.
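
That "filesystem within a file" trick can be as simple as an append-only
pack file plus an offset/length index.  A toy sketch of the idea (the
names and on-disk format here are made up for illustration, not anybody's
real tool):

    /* Pack many small records back to back in one big file, so they land
     * on adjacent sectors instead of being scattered across blocks. */
    #include <stdio.h>
    #include <string.h>

    struct entry { long offset; long length; };

    int main(void)
    {
        const char *records[] = { "tiny file one", "tiny file two", "etc." };
        struct entry index[3];
        FILE *pack = fopen("pack.dat", "wb");
        if (!pack)
            return 1;

        long pos = 0;
        for (int i = 0; i < 3; i++) {
            long len = (long)strlen(records[i]);
            fwrite(records[i], 1, len, pack);  /* data packed contiguously */
            index[i].offset = pos;             /* remember where it went */
            index[i].length = len;
            pos += len;
        }
        fclose(pack);

        /* To read record i later: seek to index[i].offset and read
         * index[i].length bytes -- one contiguous region on disk. */
        for (int i = 0; i < 3; i++)
            printf("record %d at offset %ld, length %ld\n",
                   i, index[i].offset, index[i].length);
        return 0;
    }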

--
Bryan Henderson                     IBM Almaden Research Center
San Jose CA                         Filesystems



