On Wed, Sep 28, 2011 at 7:14 AM, Jens Axboe <axboe@xxxxxxxxx> wrote:
>
>  /*
> - * Note: If a driver supplied the queue lock, it should not zap that lock
> - * unexpectedly as some queue cleanup components like elevator_exit() and
> - * blk_throtl_exit() need queue lock.
> + * Note: If a driver supplied the queue lock, it is disconnected
> + * by this function. The actual state of the lock doesn't matter
> + * here as the request_queue isn't accessible after this point
> + * (QUEUE_FLAG_DEAD is set) and no other requests will be queued.
>   */

So quite frankly, I just don't believe in that comment.

If no more requests will be queued or completed, then the queue lock is
irrelevant and should not be changed.

More importantly, if no more requests are queued or completed after
blk_cleanup_queue(), then we wouldn't have had the bug that we clearly
had with the elevator accesses, now would we?

So the comment seems to be obviously bogus and wrong. I pulled this, but
I think the "just move the teardown" would have been the safer option.

What happens if a request completes on another CPU just as we are
changing locks, and we lock one lock and then unlock another?!

                   Linus
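
(To make that last hazard concrete, here is a minimal userspace sketch --
not code from this patch, and the struct/field names are hypothetical
stand-ins for q->queue_lock and q->__queue_lock -- of what it means to
lock one lock and then unlock another when the lock pointer is switched
in between:)

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct queue {
	pthread_mutex_t *lock;		/* stand-in for q->queue_lock */
	pthread_mutex_t internal_lock;	/* stand-in for q->__queue_lock */
};

int main(void)
{
	pthread_mutex_t driver_lock;
	pthread_mutexattr_t attr;
	struct queue q;
	int ret;

	/* Error-checking mutexes so the mismatch is reported, not UB. */
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&driver_lock, &attr);
	pthread_mutex_init(&q.internal_lock, &attr);

	q.lock = &driver_lock;		/* driver supplied the queue lock */

	/* Completion path: take whatever q->lock points at right now. */
	pthread_mutex_lock(q.lock);	/* locks driver_lock */

	/* Cleanup path on "another CPU": switch to the internal lock. */
	q.lock = &q.internal_lock;

	/* The matching unlock follows the new pointer: wrong lock. */
	ret = pthread_mutex_unlock(q.lock);
	printf("unlock of internal lock: %s\n", strerror(ret)); /* EPERM */
	printf("driver_lock is still held, internal_lock never was\n");
	return 0;
}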