-
+ if (!test_and_set_bit(NVME_CTRL_NS_DEAD, &ctrl->flags)) {
+ list_for_each_entry(ns, &ctrl->namespaces, list)
+ nvme_set_queue_dying(ns);
+ }
Looking at it now, I'm not sure I understand the need for this flag. It
seems to make nvme_kill_queues safe against reentrance, but can't the
admin queue unquiesce still end up unbalanced under reentrance?
How is this not broken today (or ever since quiesce/unquiesce started
accounting)? Maybe I lost some context on the exact subtlety of how
nvme-pci uses this interface...
Yes, this also looks weird and I had a TODO list entry for myself
to look into what is going on here. The whole interaction
with nvme_remove_namespaces is pretty weird to start with, and then
the code in PCIe is even weirder. But to feel confident touching
this I'd need real hot-removal testing, for which I don't
have a good rig right now.
Let's, for a start, move the bit check up in the function and reverse
its polarity so we return early if the bit is already set, unless
someone can make sense of why the current code is OK.