On 5/11/20 10:53 PM, Giuseppe Bilotta wrote:
Hello Piergiorgio,
On Mon, May 11, 2020 at 6:15 PM Piergiorgio Sartor
<piergiorgio.sartor@xxxxxxxx> wrote:
Hi again!
I made a quick test.
I disabled the lock / unlock in raid6check.
With lock / unlock, I get around 1.2MB/sec
per device component, with ~13% CPU load.
Without lock / unlock, I get around 15.5MB/sec
per device component, with ~30% CPU load.
So, it seems the lock / unlock mechanism is
quite expensive.
I'm not sure what's the best solution, since
we still need to avoid race conditions.
Any suggestion is welcome!
Would it be possible/effective to lock multiple stripes at once? Lock,
say, 8 or 16 stripes, process them, unlock. I'm not familiar with the
internals, but if locking is O(1) on the number of stripes (at least
if they are consecutive), this would help reduce (potentially by a
factor of 8 or 16) the costs of the locks/unlocks at the expense of
longer locks and their influence on external I/O.
Hmm, maybe something like:
check_stripes
  -> mddev_suspend
  while (whole_stripe_num--) {
          check each stripe
  }
  -> mddev_resume
Then just need to call suspend/resume once.
Thanks,
Guoqing