Re: awful request merge results while simulating high IOPS multipath


 



On Wed, Feb 25 2015 at  1:17pm -0500,
Busch, Keith <keith.busch@xxxxxxxxx> wrote:

> Sorry, my reply was non sequitur to this thread. We don't do merging
> in NVMe.

NVMe may not, but current dm-multipath's top-level queue will.  And any
future blk-mq-enabled dm-multipath (which I'm starting to look into now)
will need to as well.

> Our first bottleneck appears to be the device mapper's single lock
> request queue.

Obviously if we switched dm-multipath over to blk-mq we'd eliminate
that.  I'll see how things go and will share any changes I come up
with.
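To make the bottleneck concrete, here is a hypothetical userspace sketch
(plain Python, invented class names, not kernel code) contrasting the two
queueing models under discussion: the old request_queue, where every
submitter contends on one lock, versus the blk-mq-style per-CPU software
queues that avoid that shared lock on the submission path.

```python
# Hypothetical sketch, not kernel code: single-lock queue vs per-CPU queues.
from threading import Lock

class SingleQueue:
    """Old request_queue model: every submitter takes one shared lock."""
    def __init__(self):
        self.lock = Lock()
        self.requests = []

    def submit(self, req):
        with self.lock:              # the contention point on many-core systems
            self.requests.append(req)

class MultiQueue:
    """blk-mq-style model: one software queue per CPU, no shared submit lock."""
    def __init__(self, ncpus):
        self.queues = [[] for _ in range(ncpus)]

    def submit(self, cpu, req):
        self.queues[cpu].append(req)  # per-CPU, lock-free on the submit path

mq = MultiQueue(4)
mq.submit(0, "r1")
mq.submit(1, "r2")
print(mq.queues)  # each request lands on its submitting CPU's queue
```

The point of the sketch is only the structural difference: in the second
model the submit path touches per-CPU state, so converting dm-multipath to
blk-mq would remove the single-lock serialization mentioned above.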

FYI, here is a related exchange Jens and I had on the LSF-only mailing
list:

On Tue, Feb 24 2015 at  1:43pm -0500,
Jens Axboe <axboe@xxxxxxxxx> wrote:

> On 02/24/2015 10:37 AM, Mike Snitzer wrote:
>
> >I agree.  I'd hate to be called up front to tap dance around some yet to
> >be analyzed issue.  But discussing the best way to update multipath for
> >blk-mq devices is fair game.
> >
> >As is, the current blk-mq code doesn't have any IO scheduler so the
> >overall approach that DM multipath _attempts_ to take (namely leaning on
> >the elevator to create larger requests that are then balanced across the
> >underlying paths) is a non-starter.
> 
> No it isn't, blk-mq still provides merging, the logic would very
> much be the same there... I think the crux of the problem is the way
> too frequent queue runs, that'd similarly be a problem on the blk-mq
> front.
> 
> -- 
> Jens Axboe
> 
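To illustrate the two-stage approach discussed above (an elevator building
larger requests out of adjacent small ones, which are then balanced across
the underlying paths), here is a hypothetical userspace sketch in plain
Python. The names merge_adjacent and balance are invented for illustration;
this is a toy model of the idea, not the DM multipath implementation.

```python
# Toy model, not kernel code: merge back-to-back requests, then
# round-robin the merged requests across paths.
from itertools import cycle

def merge_adjacent(requests):
    """Merge (offset, length) requests that are contiguous, the way an
    elevator builds one larger request out of adjacent small ones."""
    merged = []
    for off, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == off:
            merged[-1][1] += length      # extend the previous request
        else:
            merged.append([off, length])
    return [tuple(r) for r in merged]

def balance(requests, paths):
    """Round-robin the merged requests across the underlying paths."""
    rr = cycle(paths)
    return [(next(rr), req) for req in requests]

reqs = [(0, 4), (4, 4), (8, 4), (100, 4), (104, 4)]
merged = merge_adjacent(reqs)            # [(0, 12), (100, 8)]
print(balance(merged, ["sda", "sdb"]))   # [('sda', (0, 12)), ('sdb', (100, 8))]
```

The sketch also shows why merging matters for multipath: without the first
stage, the five small requests would be scattered across both paths instead
of arriving as two large sequential requests.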

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



