On Fri, Jun 05, 2020 at 08:24:50AM +0100, John Garry wrote:
> On 05/06/2020 01:59, Ming Lei wrote:
> > > hctx5: default 36 37 38 39
> > > hctx6: default 40 41 42 43
> > > hctx7: default 44 45 46 47
> > > hctx8: default 48 49 50 51
> > > hctx9: default 52 53 54 55
> > > hctx10: default 56 57 58 59
> > > hctx11: default 60 61 62 63
> > > hctx12: default 0 1 2 3
> > > hctx13: default 4 5 6 7
> > > hctx14: default 8 9 10 11
> > > hctx15: default 12 13 14 15
> > OK, the queue mapping is correct.
> >
> > As I mentioned in another thread, the real hw tag may be set wrongly.
>
> I doubt this.
>
> And I think that you should also be able to add the same debug to
> blk_mq_hctx_notify_offline() to see whether there are still driver tags
> allocated after all the scheduler tags have been freed, for any test in
> your env.

No, that isn't possible: the scheduler tag's lifetime covers the whole
request's lifetime.

> > You have to double check your cooked tag allocation algorithm and see
> > whether it works correctly when more requests than the real hw queue
> > depth are queued to hisi_sas; the correct way is to return
> > SCSI_MLQUEUE_HOST_BUSY from .queuecommand().
>
> Yeah, the LLDD would just reject requests in that scenario and we would
> know about it from the logs etc.
>
> Anyway, I'll continue to check.

The merged patch is much simpler than before: new requests are prevented
from being allocated on the inactive hctx, and then all in-flight
requests on that hctx are drained. You need to check whether requests are
queued to hw correctly.

Thanks,
Ming
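
P.S. For reference, the SCSI_MLQUEUE_HOST_BUSY fallback mentioned above
follows this pattern. This is only a minimal sketch: struct hypo_hba,
hypo_get_hw_tag() and hypo_send_to_hw() are hypothetical stand-ins for a
driver's own helpers, not hisi_sas code; SCSI_MLQUEUE_HOST_BUSY and
shost_priv() are the real midlayer interfaces.

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int hypo_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
	struct hypo_hba *hba = shost_priv(shost);
	int hw_tag = hypo_get_hw_tag(hba);	/* hypothetical hw slot allocator */

	if (hw_tag < 0)
		/* No free hw slot: hand the command back to the midlayer. */
		return SCSI_MLQUEUE_HOST_BUSY;

	hypo_send_to_hw(hba, cmd, hw_tag);	/* hypothetical submission path */
	return 0;
}

When .queuecommand() returns SCSI_MLQUEUE_HOST_BUSY, the midlayer requeues
the command and retries it later, so an overrun of the real hw queue depth
shows up as retries rather than lost requests.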
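
P.P.S. The shape of the merged patch's offline handling is very roughly as
below. This is a sketch only, assuming a hypothetical hctx_has_requests()
helper; the real implementation is blk_mq_hctx_notify_offline() in
block/blk-mq.c, which also handles memory barriers and only acts when the
last CPU mapped to the hctx goes offline.

#include <linux/blk-mq.h>
#include <linux/delay.h>

static int sketch_hctx_notify_offline(struct blk_mq_hw_ctx *hctx)
{
	/* 1. Mark the hctx inactive so no new tag can be allocated from it. */
	set_bit(BLK_MQ_S_INACTIVE, &hctx->state);

	/* 2. Drain: wait until every tag allocated from this hctx is freed. */
	while (hctx_has_requests(hctx))		/* hypothetical helper */
		msleep(5);

	return 0;
}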