On Tue, Feb 15, 2022 at 07:47:04PM +0100, Christoph Hellwig wrote:
> On Tue, Feb 15, 2022 at 07:22:40AM -0800, Keith Busch wrote:
> > I can't actually tell if not checking the DYING flag was
> > intentional or not, since the comments in blk_queue_start_drain() say
> > otherwise.
> >
> > Christoph, do you know the intention here? Should __bio_queue_enter()
> > check the queue DYING flag, or do you prefer drivers explicitly set the
> > disk state like this? It looks to me the queue flags should be checked
> > since that's already tied to the freeze wait_queue_head_t.
>
> It was intentional but maybe not fully thought out. Do you remember why
> we're doing the manual setting of the dying flag instead of just calling
> del_gendisk early on in nvme? Because calling del_gendisk is supposed
> to be all that a tree needs to do.

When the driver concludes new requests can't ever succeed, we had been
setting the queue to DYING first so that new requests can't enter, since
such requests could prevent forward progress.

AFAICT, just calling del_gendisk() is fine for a graceful removal. It
calls fsync_bdev() to flush out pending writes before setting the disk
state to "DEAD". Setting the queue to dying first will "freeze" the
queue, which is why fsync_bdev() would otherwise block. We were relying
on the queue DYING flag to prevent that from blocking.

Perhaps another way to do this might be to remove the queue DYING
setting, and let the driver internally fail new requests instead? There
may be some issues with doing it that way IIRC, but blk-mq has evolved a
bit from where we started, so I'd need to test it out to confirm.