Re: [RFC PATCH] dm: fix excessive dm-mq context switching


Hi Mike,

>> So I gave your patches a go (dm-4.6) but I still don't see the
>> improvement you reported (while I do see a minor improvement).
>>
>> null_blk queue_mode=2 submit_queues=24
>> dm_mod blk_mq_nr_hw_queues=24 blk_mq_queue_depth=4096 use_blk_mq=Y
>>
>> I see 620K IOPs on dm_mq vs. 1750K IOPs on raw nullb0.
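For anyone wanting to reproduce this, a rough sketch of the setup (the
multipath table, device name and fio options below are illustrative guesses,
not taken from this thread; only the module parameters are the ones quoted
above):

  # null_blk in blk-mq mode, one submit queue per CPU
  modprobe null_blk queue_mode=2 submit_queues=24
  # request-based dm on blk-mq with matching hw queues
  modprobe dm_mod use_blk_mq=Y blk_mq_nr_hw_queues=24 blk_mq_queue_depth=4096
  # single-path request-based multipath device on top of nullb0
  echo "0 $(blockdev --getsz /dev/nullb0) multipath 0 0 1 1 service-time 0 1 2 /dev/nullb0 1000 1" | \
      dmsetup create dm_mq_test
  # run the same random-read load against /dev/nullb0 and /dev/mapper/dm_mq_test
  fio --name=dm-mq --filename=/dev/mapper/dm_mq_test --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=24 --runtime=30 --time_based --group_reporting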

> blk_mq_nr_hw_queues=24 isn't likely to help you (but with these patches,
> the first being the most important, it shouldn't hurt either provided
> you have 24 cpus).

I tried with fewer queues but, as you said, it didn't have an impact...

> Could be you have multiple NUMA nodes and are seeing problems from that?

I am running on a dual socket server, so this can most likely be the
culprit...

> I have 12 cpus (in the same physical cpu) and only a single NUMA node.
> I get the same results with blk_mq_nr_hw_queues=12 as with
> blk_mq_nr_hw_queues=4 (same goes for null_blk submit_queues).
> I've seen my IOPs go from ~950K to ~1400K.  The peak null_blk can get on
> my setup is ~1950K.  So I'm still seeing a ~25% drop with dm-mq (but
> that is much better than the over 50% drop I was seeing).

That's what I was planning on :(

>> Is there something I'm missing?

> Not sure, I just emailed out all my patches (and cc'd you).  Please
> verify you're using the latest here (same as 'dm-4.6' branch):
> https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/log/?h=for-next
>
> I rebased a couple times... so please diff what you have tested against
> this latest 'dm-4.6' branch.

I am. I'll try to instrument what's going on...
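
For the diff you suggest above, something along these lines should do (the
remote name and limiting the diff to drivers/md are arbitrary choices for
illustration, not from this thread):

  # fetch the current dm-4.6/for-next branch from the device-mapper tree
  git remote add dm https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git
  git fetch dm
  # compare the tested tree against it, limited to the device-mapper code
  git diff dm/dm-4.6 -- drivers/md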

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



