On Tue, Nov 19, 2019 at 09:56:45AM -0800, James Smart wrote:
> On 11/18/2019 4:05 PM, Sagi Grimberg wrote:
> >
> > This is a much simpler fix that does not create this churn local to
> > every driver. Also, I don't like the assumptions about tag reservations
> > that the drivers are making locally (that the connect will have tag 0,
> > for example). All this makes this look like a hack.
>
> Agree with Sagi on this last statement. When I reviewed the patch, it was
> very non-intuitive. Why the dependency on tag 0, why a queue number
> squirrelled away on this one request only, why change the initialization
> (queue pointer) on this one specific request from its hctx, and so on.
> For someone without the history, it's ugly.
>
> >
> > I'm starting to think we maybe need to get the connect out of the block
> > layer execution if it's such a big problem... It's a real shame if that
> > is the case...
>
> Yep. This is starting to be another case where perhaps I should be changing
> nvme-fc's blk-mq hctx to nvme queue relationship in a different manner. I'm
> having a very hard time with all the queue resources today's policy is
> wasting on targets.

Wrt. the above two points, I believe neither is an issue with this
driver-specific approach, see my comment:

https://lore.kernel.org/linux-block/fda43a50-a484-dde7-84a1-94ccf9346bdd@xxxxxxxxxxxx/T/#mb72afa6ed93bc852ca266779977634cf6214b329

Thanks,
Ming
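
[Editor's note: for readers without the thread history, the sketch below illustrates the general tag-reservation pattern being debated above, not the actual patch or the real nvme-fc code. A fabrics-style driver can hold back one reserved tag per hardware queue for the Connect command and allocate that request against a specific hctx so it is issued on the queue it is meant to initialize. All identifiers prefixed with my_ (my_ctrl, my_queue, my_request_pdu, etc.) are hypothetical placeholders; only the blk-mq APIs themselves (blk_mq_alloc_tag_set, blk_mq_alloc_request_hctx, BLK_MQ_REQ_RESERVED) are real.]

    /* Illustrative sketch only -- not the patch under discussion. */
    #include <linux/blk-mq.h>
    #include <linux/string.h>
    #include <linux/err.h>

    struct my_ctrl {
            unsigned int    queue_count;
            u32             sqsize;
    };

    struct my_queue {
            struct my_ctrl  *ctrl;
            u16             qid;
    };

    /* per-request driver data ("pdu") carrying the queue pointer */
    struct my_request_pdu {
            struct my_queue *queue;
    };

    static int my_setup_io_tag_set(struct my_ctrl *ctrl,
                                   struct blk_mq_tag_set *set,
                                   const struct blk_mq_ops *ops)
    {
            memset(set, 0, sizeof(*set));
            set->ops            = ops;
            set->nr_hw_queues   = ctrl->queue_count - 1;
            set->queue_depth    = ctrl->sqsize;
            /* hold back one tag per queue so Connect can never starve */
            set->reserved_tags  = 1;
            set->cmd_size       = sizeof(struct my_request_pdu);
            set->numa_node      = NUMA_NO_NODE;
            set->driver_data    = ctrl;

            return blk_mq_alloc_tag_set(set);
    }

    static struct request *my_alloc_connect_rq(struct request_queue *q,
                                               unsigned int qid)
    {
            /*
             * Allocate from the reserved pool and pin the request to
             * the hctx that maps to this nvme queue, so the Connect is
             * issued on the queue it is meant to bring up.
             */
            return blk_mq_alloc_request_hctx(q, REQ_OP_DRV_OUT,
                                             BLK_MQ_REQ_NOWAIT |
                                             BLK_MQ_REQ_RESERVED,
                                             qid - 1);
    }

[The objection raised by Sagi and James above is that the correctness of such a scheme rests on driver-local assumptions, e.g. which tag the reserved Connect request ends up with and that its per-request queue pointer is initialized differently from every other request on the hctx, which is what makes the approach look hack-ish to reviewers without the history.]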