Re: extremely slow writes to degraded array

> The same issue is still there. Short of a kernel bug, maybe
> some md settings are less than optimal.

There is no MD RAID setting to increase the IOPS-per-TB of the
storage system or to make the free list of the filesystem on top
of it less fragmented.
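
As a rough worked example (the per-drive figure is an assumption,
not a measurement from this array): a 7200rpm drive delivers on the
order of 120 random IOPS regardless of its capacity, so

   6 x 10TB members:  ~6 * 120 =  ~720 IOPS / 60TB -> ~12 IOPS per TB
  30 x  2TB members: ~30 * 120 = ~3600 IOPS / 60TB -> ~60 IOPS per TB

that is, reaching a given capacity with fewer, bigger drives shrinks
the IOPS-per-TB of the array, and no MD setting can change that.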

> I see some postings saying dirty limit should actually be lowered?

That is generally a good idea, but it does not necessarily help with
the two limitations mentioned above.
https://www.sabi.co.uk/blog/14-two.html?141010#141010
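
For example, the dirty limits can be capped in absolute bytes rather
than as a percentage of RAM (the values below are illustrative and
would need tuning to the actual speed of the degraded array, they are
not a recommendation):

  # Limit total dirty pages to 256MiB and start background
  # writeback at 64MiB (illustrative values; setting the
  # *_bytes variants overrides the *_ratio ones):
  sysctl vm.dirty_bytes=$((256*1024*1024))
  sysctl vm.dirty_background_bytes=$((64*1024*1024))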

>> While the fs is at about 83% full (10TB free out of 60TB) I
>> had the array almost 100% full in the past (it was then a
>> 20TB array) and did not notice such a drastic slowdown.

The high CPU time depends on how fragmented the filesystem's free
list has become over time. Filling it to nearly 100% and then
deleting a lot of files (probably many small ones) has long-term
effects: files later allocated in the newly freed areas end up
fragmented themselves.
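
If the filesystem is XFS (the thread does not say, and /dev/md0 is an
assumed device name), the free-space fragmentation can be inspected
read-only with:

  # Histogram of free extents by size; a long tail of tiny
  # extents indicates a badly fragmented free list:
  xfs_db -r -c freesp /dev/md0

For ext4 the equivalent report comes from e2freefrag.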

>> [...] %util is not that bad, though the array is
>> significantly higher than the members, and there is still
>> much reading while writing. [...]

A contributing factor here is the wide parity stripe: any write
smaller than a full stripe forces a read-modify-write cycle, which
increases storage wait times further and accounts for the reads seen
during writes.
https://www.sabi.co.uk/blog/12-thr.html?120414#120414
https://www.sabi.co.uk/blog/12-two.html?120218#120218
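
To see what the full-stripe size actually is, and whether the RAID5/6
stripe cache might be absorbing some of the RMW reads (/dev/md0 is
again an assumed device name):

  # Chunk size times the number of data members gives the
  # full-stripe size; any smaller write needs read-modify-write:
  mdadm --detail /dev/md0 | grep -iE 'chunk|raid devices'
  # Current stripe cache size in pages (raid456 only); raising
  # it can reduce RMW read traffic at the cost of memory:
  cat /sys/block/md0/md/stripe_cache_size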


