Re: [OT] block allocation algorithm [Was] Re: heavily fragmented file system.. How to defrag it on-line??

On Mar 09, 2004  15:46 +0800, Isaac Claymore wrote:
> I've got a workload where several clients tend to write to separate files
> under the same dir simultaneously, resulting in heavily fragmented files.
> Even worse, those files are rarely read simultaneously, so read
> performance degrades quite a lot.
> 
> I'm wondering whether there's any feature that helps alleviate
> fragmentation in such workloads. Does writing to different dirs (of the
> same filesystem) help?

Very much yes.  Files allocated from different directories will get blocks
from different parts of the filesystem (if space is available), so they
should be less fragmented.  In 2.6 there is also a heuristic whereby files
opened by different processes allocate from different parts of a block
group, even within the same directory, but that only really helps if the
files themselves aren't too large (i.e. under 8MB or so).
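As a minimal sketch of the per-directory approach (my illustration, not
anything built into ext3): each client process creates its own
subdirectory and writes there, so the block allocator draws each file's
blocks from a different part of the filesystem.  The "client%d" naming
and the 1MB payload are assumptions for the example.

/*
 * Sketch: give each writer its own subdirectory so ext2/ext3 places
 * its file in a (likely) different block group.
 */
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int write_in_own_dir(const char *base, int client_id)
{
    char path[PATH_MAX];
    char buf[4096];
    int fd, i;

    /* Per-client directory: the allocator will tend to pick a
     * different block group for files created under it. */
    snprintf(path, sizeof(path), "%s/client%d", base, client_id);
    if (mkdir(path, 0755) < 0 && errno != EEXIST)
        return -1;

    snprintf(path, sizeof(path), "%s/client%d/data", base, client_id);
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    memset(buf, 'x', sizeof(buf));
    for (i = 0; i < 256; i++)        /* 256 * 4KB = 1MB */
        if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
            close(fd);
            return -1;
        }
    return close(fd);
}

You can check the result afterwards with filefrag (from e2fsprogs),
which reports how many extents each file ended up with.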

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/


_______________________________________________

Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
