On Mon, Aug 24, 2020 at 02:34:04PM -0700, Sagi Grimberg wrote:
>
> > > I'd think it'd be an improvement, yes.
> >
> > Please see the reason why it is put back of hctx in
> > 073196787727("blk-mq: Reduce blk_mq_hw_ctx size").
>
> I know why it is there, just was saying that having an additional
> pointer is fine. But the discussion is moot.
>
> > > > .q_usage_counter should have been put in the 1st cacheline of
> > > > request queue. If it is moved to the 1st cacheline of request queue,
> > > > we shouldn't put 'dispatch_counter' there, because it may hurt other
> > > > non-blocking drivers.
> > >
> > > q_usage_counter currently there, and the two will always be taken
> > > together, and there are several other stuff that we can remove from
> > > that cacheline without hurting performance for anything.
> > >
> > > And when q_usage_counter is moved to the first cacheline, then I'd
> > > expect that the dispatch_counter also moves to the front (maybe not
> > > the first if it is on the expense of other hot members, but definitely
> > > it should be treated as a hot member).
> > >
> > > Anyways, I think that for now we should place them together.
> >
> > Then it may hurt non-blocking.
> >
> > Each hctx has only one run-work, if the hctx is blocked, no other request
> > may be queued to hctx any more. That is basically sync run queue, so I
> > am not sure good enough perf can be expected on blocking.
>
> I don't think that you should assume that a blocking driver will block
> normally, it will only rarely block (very rarely).

If nvme-tcp only blocks rarely, I am just wondering why not switch to
non-blocking, which can be done simply with one driver-specific wq work?
Then nvme-tcp can be aligned with the other nvme drivers.

> > So it may not be worth of putting the added .dispatch_counter together
> > with .q_usage_counter.
>
> I happen to think it would. Not sure why you resist so much given how
> request_queue is arranged currently.
The reason is the same as in 073196787727 ("blk-mq: Reduce blk_mq_hw_ctx
size"): non-blocking is the preferred style for a blk-mq driver, so we can
just focus on non-blocking wrt. performance improvement; as I mentioned,
blocking has the big problem of a sync run queue.

It may be contradictory to improve both. For example, if the added
.dispatch_counter is put together with .q_usage_counter, it will be fetched
into L1 unnecessarily, which is definitely not good for non-blocking.

> > > > > Also maybe a better name is needed here since it's just
> > > > > for blocking hctxs.
> > > > >
> > > > > > +	wait_queue_head_t	mq_quiesce_wq;
> > > > > > +
> > > > > >  	struct dentry		*debugfs_dir;
> > > > > >  #ifdef CONFIG_BLK_DEBUG_FS
> > > > >
> > > > > What I think is needed here is at a minimum test quiesce/unquiesce loops
> > > > > during I/O. code auditing is not enough, there may be driver assumptions
> > > > > broken with this change (although I hope there shouldn't be).
> > > >
> > > > We have elevator switch / updating nr_request stress test, and both relies
> > > > on quiesce/unquiesce, and I did run such test with this patch.
> > >
> > > You have a blktest for this? If not, I strongly suggest that one is
> > > added to validate the change also moving forward.
> >
> > There are lots of blktest tests doing that, such as block/005,
> > block/016, block/021, ...
>
> Good, but I'd also won't want to get this without making sure the async
> quiesce works well on large number of namespaces (the reason why this
> is proposed in the first place). Not sure who is planning to do that...

That can be added when async quiesce is done.


thanks,
Ming