Re: dm-multipath low performance with blk-mq

On Tue, Jan 26, 2016 at 03:01:05PM +0100, Hannes Reinecke wrote:
> On 01/26/2016 02:29 PM, Mike Snitzer wrote:
> > On Mon, Jan 25 2016 at  6:37pm -0500,
> > Benjamin Marzinski <bmarzins@xxxxxxxxxx> wrote:
> > 
> >> On Mon, Jan 25, 2016 at 04:40:16PM -0500, Mike Snitzer wrote:
> >>> On Tue, Jan 19 2016 at  5:45P -0500,
> >>> Mike Snitzer <snitzer@xxxxxxxxxx> wrote:
> >>
> >> I don't think this is going to help __multipath_map() without some
> >> configuration changes.  Now that we're running on already merged
> >> requests instead of bios, the m->repeat_count is almost always set to 1,
> >> so we call the path_selector every time, which means that we'll always
> >> need the write lock. Bumping up the number of IOs we send before calling
> >> the path selector again will give this patch a chance to do some good
> >> here.
> >>
> >> To do that you need to set:
> >>
> >> 	rr_min_io_rq <something_bigger_than_one>
> >>
> >> in the defaults section of /etc/multipath.conf and then reload the
> >> multipathd service.
> >>
> >> The patch should hopefully help in multipath_busy() regardless of
> >> the rr_min_io_rq setting.
> > 
> > This patch, while generic, is meant to help the blk-mq case.  A blk-mq
> > request_queue doesn't have an elevator so the requests will not have
> > seen merging.
> > 
> > But yes, implied in the patch is the requirement to increase
> > m->repeat_count via multipathd's rr_min_io_rq (I'll backfill a proper
> > header once it is tested).
> > 
> But that would defeat load balancing, would it not?
> IE when you want to do load balancing you would constantly change
> paths, thereby always taking the write lock.
> Which would render the patch pointless.

But putting in a large rr_min_io_rq value will allow us to validate that
the patch does help things, and that there isn't another bottleneck hidden
right behind the spinlock.
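For anyone following along, the setting Ben described would look roughly
like this in /etc/multipath.conf (the value 100 here is only an
illustrative guess; the right value depends on your workload):

```
defaults {
	rr_min_io_rq 100
}
```

After editing the file, reload the configuration (e.g. by reloading the
multipathd service, or with `multipathd reconfigure`) so the larger
m->repeat_count takes effect.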

> I was rather wondering if we could expose all active paths as
> hardware contexts and let blk-mq do the I/O mapping.
> That way we would only have to take the write lock if we have to
> choose a new pgpath/priority group ie in the case the active
> priority group goes down.
> 
> Cheers,
> 
> Hannes
> -- 
> Dr. Hannes Reinecke		   Teamlead Storage & Networking
> hare@xxxxxxx			               +49 911 74053 688
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
> HRB 21284 (AG Nürnberg)
> 
> --
> dm-devel mailing list
> dm-devel@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/dm-devel
