Re: [OT] block allocation algorithm [Was] Re: heavily fragmented file system.. How to defrag it on-line??

Isaac Claymore <clay@xxxxxxxxxxxxx> wrote:
>
> I've got a workload where several clients tend to write to separate files
> under the same directory simultaneously, resulting in heavily fragmented
> files.  Even worse, those files are rarely read simultaneously, so read
> performance degrades quite a lot.

We really, really suck at this.  I have a little hack here which provides
an ioctl for instantiating blocks beyond end-of-file, so each time you've
written 128M you go into the filesystem and say "reserve me another 128M".
This causes the 128M chunks to be laid out very nicely indeed.
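
To make the shape of that concrete, here is a minimal userspace sketch of
the reserve-ahead idea.  It uses fallocate(2) with FALLOC_FL_KEEP_SIZE as a
stand-in for the private ioctl (this is not the actual hack; it assumes a
Linux/glibc combination that has fallocate, and the file name is made up):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	#define CHUNK	(128L * 1024 * 1024)	/* reserve 128M at a time */

	int main(void)
	{
		int fd = open("datafile", O_WRONLY | O_CREAT, 0644);
		off_t reserved = 0;

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * Reserve 128M beyond end-of-file without changing the
		 * file size; repeat after each 128M of data written so
		 * the chunks are laid out contiguously.
		 */
		if (fallocate(fd, FALLOC_FL_KEEP_SIZE, reserved, CHUNK) < 0)
			perror("fallocate");
		reserved += CHUNK;

		/* ... write data, reserving the next chunk as EOF nears ... */

		close(fd);
		return 0;
	}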

It is, however, wildly insecure: it's trivial to use it to read
uninitialised disk blocks.  But we happen not to care about that.

It is also a potential way forward for fixing this properly.  Do the
growth automatically somehow, fix the security problem, and stick the
inodes on the orphan list so that they get trimmed back to the correct
size during recovery, and there we have it.

We're a bit short on bodies to do it at present though.

One thing you could do, which _may_ suit, is to write the files out
beforehand and change your app to overwrite them in place.
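
For illustration, a sketch of that pre-write approach using
posix_fallocate(3) (assumes a glibc that has it; on filesystems without
native preallocation glibc emulates it by writing to every block, which is
exactly the write-beforehand effect, and the file name and size below are
made up):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		off_t size = 1L << 30;		/* 1G file, illustrative */
		int fd = open("datafile", O_RDWR | O_CREAT, 0644);
		int err;

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * Instantiate every block of the file in one pass so
		 * they come out contiguous.  posix_fallocate returns an
		 * errno value rather than setting errno.
		 */
		err = posix_fallocate(fd, 0, size);
		if (err) {
			fprintf(stderr, "posix_fallocate: %d\n", err);
			return 1;
		}

		/* ... the app then pwrite()s real data over that range ... */

		close(fd);
		return 0;
	}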

Or just change your app to buffer more data: write 16MB at a time.
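
A sketch of that, using a fully buffered stdio stream (buffer size and
file name are illustrative):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		size_t bufsz = 16L * 1024 * 1024;	/* 16MB */
		char *buf = malloc(bufsz);
		FILE *fp = fopen("datafile", "w");

		if (!fp || !buf)
			return 1;

		/*
		 * Fully buffered: data accumulates in userspace until
		 * 16MB is pending, then hits the filesystem as one large
		 * request the block allocator can satisfy contiguously.
		 */
		setvbuf(fp, buf, _IOFBF, bufsz);

		/* ... fwrite() application data as usual ... */

		fclose(fp);
		free(buf);
		return 0;
	}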


_______________________________________________

Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
