On Wed, Sep 7, 2011 at 12:37, Amit Sahrawat <amit.sahrawat83@xxxxxxxxx> wrote:
> I know that lazy umount was designed so that the mountpoint is not
> accessible to any future I/O, while ongoing I/O continues to work; the
> unmount only actually occurs once that I/O has finished. But this can
> be tricky at times: there are situations where an operation keeps
> running much longer than expected, and you cannot unplug the device
> during that period because there is a chance of filesystem corruption
> if you do.
> Is there anything that could be done in this context? Simply walking
> the fd table and closing the fds will not serve the purpose, and there
> is every chance of an oops occurring from closing them that way.
> Should we instead signal all the processes with open fds on that
> mountpoint to close them, i.e. handle it from the user-space
> applications? Does this make sense?
>
> Please throw some insight into this. I am not looking for an exact
> solution; I am just after opinions that can add to this.

Essentially what you want here is a 'forced unmount' option. It's
difficult to do this directly in the existing VFS model; you'd need to
change the operations structure for all open files/inodes of that
filesystem in a race-free manner, _and_ wait for any outstanding
operations to complete. The VFS isn't really designed to support
something like this.

What you could try doing, however, is creating a wrapper filesystem -
one that redirects all requests to an underlying filesystem, but
supports an operation to:

1) Make all future requests fail with -EIO
2) Invalidate any existing VMA mappings
3) Wait for all outstanding requests to complete
4) Unmount (ie, unreference) the underlying filesystem

A rough sketch of steps 1 and 3 follows below. This will result in some
overhead, of course, but would seem to be the safest route.
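For the "fail new I/O, drain in-flight I/O" part, the core could look
roughly like the sketch below. This is only a minimal illustration of
the idea, not working stacking code: the wrapfs_* names are made up,
the way the lower file is found (stashed in file->private_data at open
time here) is an assumption, and memory barriers and per-superblock
state are omitted for brevity.

/*
 * Hypothetical sketch: fail new requests with -EIO and drain the ones
 * already inside the underlying filesystem.
 */
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/atomic.h>
#include <linux/wait.h>

static atomic_t wrapfs_forced = ATOMIC_INIT(0);   /* set when forced unmount begins */
static atomic_t wrapfs_inflight = ATOMIC_INIT(0); /* requests currently in the lower fs */
static DECLARE_WAIT_QUEUE_HEAD(wrapfs_drain_wq);

/* Step 1: every entry point checks the flag and fails with -EIO. */
static ssize_t wrapfs_read(struct file *file, char __user *buf,
			   size_t count, loff_t *ppos)
{
	struct file *lower = file->private_data;  /* assumed: lower file stored at open */
	ssize_t ret;

	/* Count ourselves in-flight *before* testing the flag so the
	 * unmount side cannot miss us. */
	atomic_inc(&wrapfs_inflight);
	if (atomic_read(&wrapfs_forced)) {
		ret = -EIO;
		goto out;
	}
	ret = vfs_read(lower, buf, count, ppos);  /* forward to the underlying fs */
out:
	if (atomic_dec_and_test(&wrapfs_inflight))
		wake_up(&wrapfs_drain_wq);
	return ret;
}

static const struct file_operations wrapfs_fops = {
	.read = wrapfs_read,
	/* ... other operations forwarded the same way ... */
};

/* Steps 1 and 3: flip the flag, then wait for outstanding requests. */
static void wrapfs_begin_forced_unmount(void)
{
	atomic_set(&wrapfs_forced, 1);
	wait_event(wrapfs_drain_wq, atomic_read(&wrapfs_inflight) == 0);
	/* Step 4 would now drop the references on the lower files/mount. */
}

Step 2 would presumably be handled by calling unmap_mapping_range() on
each wrapped inode's address_space so existing mmaps fault back into
the wrapper (which can then refuse to map the pages), and step 4 by
dropping the wrapper's references on the lower files and mount
(fput()/mntput()) once the drain above completes.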