On Tue, Jun 4, 2019 at 5:25 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>
> On Tue, Jun 4, 2019 at 4:10 AM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
> >
> > On Tue, Jun 4, 2019 at 5:18 AM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> > >
> > > On Mon, Jun 3, 2019 at 10:23 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> > > >
> > > > On Mon, Jun 3, 2019 at 1:07 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> > > > > Can we also discuss how useful it is to allow recovering a mount after
> > > > > it has been blacklisted? After we fail everything with EIO and throw
> > > > > out all dirty state, how many applications would continue working
> > > > > without some kind of restart? And if you are restarting your
> > > > > application, why not get a new mount?
> > > > >
> > > > > IOW, what is the use case for introducing a new debugfs knob that
> > > > > isn't that much different from umount+mount?
> > > >
> > > > People don't like it when their filesystem refuses to umount, which is
> > > > what happens when the kernel client can't reconnect to the MDS right
> > > > now. I'm not sure there's a practical way to deal with that besides
> > > > some kind of computer admin intervention. (Even if you umount -l, that
> > > > by design doesn't reply to syscalls and let the applications exit.)
> > >
> > > Well, that is what I'm saying: if an admin intervention is required
> > > anyway, then why not make it be umount+mount? That is certainly more
> > > intuitive than an obscure write-only file in debugfs...
> >
> > I think 'umount -f' + 'mount -o remount' is better than the debugfs file.
>
> Why '-o remount'? I wouldn't expect 'umount -f' to leave behind any
> actionable state, it should tear down all data structures, mount point,
> etc. What would '-o remount' act on?

If the mount point is in use, 'umount -f' only closes the MDS sessions and
aborts OSD requests. The mount point is still there, and any operation on
it returns -EIO. The remount changes the mount point back to its normal
state.

> Thanks,
>
> Ilya
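
For reference, a minimal sketch of the recovery sequence being proposed, assuming the filesystem is mounted at /mnt/cephfs (a hypothetical mount point; the actual path and options depend on the deployment):

    # Force-unmount: closes the MDS sessions and aborts in-flight OSD requests.
    # If the mount point is still in use it remains in place, and operations
    # on it return -EIO.
    umount -f /mnt/cephfs

    # Remount the still-present mount point to bring it back to a normal state.
    mount -o remount /mnt/cephfs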