On 11/18/2019 4:05 PM, Sagi Grimberg wrote:
>>> This is a much simpler fix that does not create this churn local to every driver. Also, I don't like the assumptions about tag reservations that the drivers are making locally (for example, that the connect will have tag 0). All of this makes it look like a hack.
>> Agree with Sagi on this last statement. When I reviewed the patch, it was very non-intuitive. Why the dependency on tag 0? Why is a queue number squirrelled away on this one request only? Why change the initialization (queue pointer) on this one specific request away from its hctx, and so on? For someone without the history, it's ugly.
> I'm starting to think we may need to get the connect out of the block layer execution if it's such a big problem... It's a real shame if that is the case...
Yep. This is starting to be another case where perhaps I should be handling nvme-fc's blk-mq hctx-to-nvme-queue relationship in a different manner. I'm having a very hard time with how many queue resources today's policy wastes on targets.
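
For anyone without the history, the mechanism being argued about is blk-mq's reserved tags. Below is a minimal sketch of how a fabrics driver might set one tag aside for the connect command; it is illustrative only, not the actual nvme-fc code, and every name and size in it (example_ctrl, example_setup_tagset, the queue depth) is made up:

/*
 * Sketch only -- not the actual nvme-fc or fabrics code.  It shows the
 * blk-mq reserved-tag mechanism under discussion, with one reserved tag
 * set aside so a fabrics connect can always be allocated.
 */
#include <linux/blk-mq.h>
#include <linux/string.h>

struct example_ctrl {
	struct blk_mq_tag_set	tag_set;
	struct request_queue	*io_q;		/* hypothetical I/O queue */
};

static int example_setup_tagset(struct example_ctrl *ctrl,
				const struct blk_mq_ops *ops)
{
	struct blk_mq_tag_set *set = &ctrl->tag_set;

	memset(set, 0, sizeof(*set));
	set->ops		= ops;
	set->nr_hw_queues	= 4;		/* arbitrary for the sketch */
	set->queue_depth	= 128;
	/*
	 * One tag held back for the connect command so it can be
	 * allocated even when the normal tag space is exhausted.
	 * Reserved tags come from the low end of the tag space, so with
	 * a single reserved tag the connect request ends up as tag 0 --
	 * the driver-local assumption being objected to above.
	 */
	set->reserved_tags	= 1;
	set->numa_node		= NUMA_NO_NODE;

	return blk_mq_alloc_tag_set(set);
}

/*
 * Allocate the connect request from the reserved pool.  A real fabrics
 * driver would use blk_mq_alloc_request_hctx() here so the connect is
 * issued on the hctx of the queue being connected -- which is where the
 * hctx-to-nvme-queue relationship gets awkward.
 */
static struct request *example_alloc_connect_rq(struct example_ctrl *ctrl)
{
	return blk_mq_alloc_request(ctrl->io_q, REQ_OP_DRV_OUT,
				    BLK_MQ_REQ_RESERVED);
}
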
-- james