On Fri, Nov 20, 2015 at 12:05:11AM +0000, Williams, Dan J wrote:
> On Fri, 2015-11-20 at 10:17 +1100, Dave Chinner wrote:
> > Actually, I think we need to trigger a filesystem shutdown before
> > doing anything else (e.g. before unmapping the inodes). That way the
> > filesystem will error out new calls to get_blocks() and prevent any
> > new IO submission while the block device teardown and inode
> > unmapping is done. i.e. solving the problem at the block device
> > level is hard, but we already have all the necessary infrastructure
> > to shut off IO and new block mappings at the filesystem level....
>
> Shutting down the filesystem on block_device remove seems a more
> invasive behavior change from what we've historically done.

I've heard that so many times I figured that would be your answer.
Yet we've got a clear situation where there is a race between
file-level access and block device operations, one that is cleanly
solved by an upfront filesystem shutdown on unplug, and still the
answer is "ignore the filesystem, we need to do everything in the
block layer, no matter how hard or complex it is to solve"...

> I.e. a best effort "let the fs struggle on to service whatever it
> can that is not dependent on new disk I/O".

And so we still have this limbo fs state that is an utter nightmare
to handle sanely. We don't know what the cause of the IO errors is,
so we have to handle them as though we can recover from them in some
way. Only when we get an error we can't possibly recover from do we
shut the filesystem down and stop all attempts at issuing IO, mapping
page faults, etc.

However, if the device has been unplugged then we *can never
recover*, so continuing on with our eyes closed and our fingers in
our ears shouting "lalalalalalala" as loud as we can won't change the
fact that we are going to shut down the filesystem in the near
future.

-Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx