Re: [RFC PATCH] MD: fix lock contention for flush bios

On 01/26/2018 10:07 AM, Ming Lei wrote:
Hi Guoqing,

On Fri, Jan 26, 2018 at 09:10:11AM +0800, Guoqing Jiang wrote:

On 01/24/2018 09:41 PM, Xiao Ni wrote:
----- Original Message -----
From: "Guoqing Jiang"<gqjiang@xxxxxxxx>
To: "Xiao Ni"<xni@xxxxxxxxxx>,linux-raid@xxxxxxxxxxxxxxx
Cc:shli@xxxxxxxxxx,neilb@xxxxxxxx, "ming lei"<ming.lei@xxxxxxxxxx>,ncroxon@xxxxxxxxxx
Sent: Wednesday, January 24, 2018 5:02:57 PM
Subject: Re: [RFC PATCH] MD: fix lock contention for flush bios



On 01/24/2018 10:43 AM, Xiao Ni wrote:
There is lock contention when many processes send flush bios to an md device, e.g. when creating many LVs on one RAID device and running mkfs.xfs on each LV.

Currently md can only handle flush requests sequentially: each new flush has to wait for mddev->flush_bio to become NULL, under mddev->lock.
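
For context, the serialized path in drivers/md/md.c looks roughly like the following simplified sketch of md_flush_request() (not the exact kernel code; details vary by kernel version):

void md_flush_request(struct mddev *mddev, struct bio *bio)
{
        spin_lock_irq(&mddev->lock);
        /* Only one flush may be in flight per mddev: wait until the
         * previous flush_bio has completed and been cleared.
         */
        wait_event_lock_irq(mddev->sb_wait,
                            !mddev->flush_bio,
                            mddev->lock);
        mddev->flush_bio = bio;
        spin_unlock_irq(&mddev->lock);

        /* Fan the flush out to all member devices from a workqueue. */
        INIT_WORK(&mddev->flush_work, submit_flushes);
        queue_work(md_wq, &mddev->flush_work);
}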
With the new approach, can we still keep the synchronization across all devices? I found the previous commit a2826aa92e2e ("md: support barrier requests on all personalities") did want to keep synchronization.
When one flush bio is submitted to md, it creates one bio for each rdev to send the flush request. If it must wait for all of those flush requests to return, then my patch breaks that rule.
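
The fan-out mentioned above is done by submit_flushes(); a condensed sketch (RCU, reference counting and Faulty checks omitted, so not the exact kernel code):

static void submit_flushes(struct work_struct *ws)
{
        struct mddev *mddev = container_of(ws, struct mddev, flush_work);
        struct md_rdev *rdev;

        /* Re-arm flush_work so its next run executes md_submit_flush_data(). */
        INIT_WORK(&mddev->flush_work, md_submit_flush_data);
        atomic_set(&mddev->flush_pending, 1);

        rdev_for_each(rdev, mddev) {
                /* One empty PREFLUSH bio per active member device. */
                struct bio *bi = bio_alloc_mddev(GFP_NOIO, 0, mddev);

                bi->bi_end_io = md_end_flush;   /* drops flush_pending */
                bi->bi_private = rdev;
                bio_set_dev(bi, rdev->bdev);
                bi->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
                atomic_inc(&mddev->flush_pending);
                submit_bio(bi);
        }

        /* Once the last per-rdev flush returns, md_submit_flush_data()
         * passes the original bio down (with PREFLUSH cleared) and only
         * then clears mddev->flush_bio for the next waiter.
         */
        if (atomic_dec_and_test(&mddev->flush_pending))
                queue_work(md_wq, &mddev->flush_work);
}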

Process A submits a flush bio to md; call this flush bio bio-a.
Process B submits a flush bio to md; call this flush bio bio-b.

Before my patch, only one process can handle a flush bio at a time: Process B has to wait until bio-a from Process A has returned. After my patch, flush bios from all processes can be handled. Can't we handle flush bios from different processes at the same time?
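
To illustrate the direction being discussed (this is not the actual RFC patch, which is not quoted here), one way to allow concurrent flushes is to replace the single shared mddev->flush_bio with a small per-request context, so each flush only waits for its own set of per-rdev flush bios. The names struct flush_info and md_end_flush_parallel() below are hypothetical:

struct flush_info {
        struct bio      *orig_bio;      /* the flush bio submitted to md */
        struct mddev    *mddev;
        atomic_t        pending;        /* outstanding per-rdev flushes  */
};

static void md_end_flush_parallel(struct bio *bi)
{
        struct flush_info *fi = bi->bi_private;

        bio_put(bi);
        if (atomic_dec_and_test(&fi->pending)) {
                if (bio_sectors(fi->orig_bio) == 0) {
                        /* Pure flush with no data: nothing left to do. */
                        bio_endio(fi->orig_bio);
                } else {
                        /* Strip PREFLUSH and resubmit the data part. */
                        fi->orig_bio->bi_opf &= ~REQ_PREFLUSH;
                        md_handle_request(fi->mddev, fi->orig_bio);
                }
                kfree(fi);
        }
}

With such a scheme, each request still waits for every member device to flush its cache, but requests from different processes no longer wait on each other.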
Hmm, barrier does need the synchronization even though it causes the contention. This patch seems to only touch the code that handles MD flush requests, not anything related to barrier; could you explain a bit if it is related?

Hmm, I didn't notice that barrier had been converted to flush/fua a long time ago. And I think some comments inside the code need to be updated.

For flush/fua, I guess we may not need it since the filesystem needs to ensure that certain requests are executed in order, but I am not so sure about it.
From the block layer's view, there is no ordering among different I/Os when handling flush requests; a flush is just a command sent to the hardware to flush the drive's internal cache.

The current strictly serialized style of handling MD flush requests does hurt performance, and IMO the idea of the patch should be correct, at least from the point of view of handling flush requests.

Thanks for the explanation, no more concern from me.

Regards,
Guoqing