On Sat, May 12, 2018 at 1:38 AM, Yan, Zheng <zyan@xxxxxxxxxx> wrote:
>
>
>> On May 11, 2018, at 20:06, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>>
>> On Fri, May 11, 2018 at 11:12 AM, Yan, Zheng <zyan@xxxxxxxxxx> wrote:
>>> this avoids force umount getting stuck at ceph_osdc_sync()
>>>
>>> Signed-off-by: "Yan, Zheng" <zyan@xxxxxxxxxx>
>>> ---
>>>  fs/ceph/super.c                 |  1 +
>>>  include/linux/ceph/osd_client.h |  5 ++++-
>>>  net/ceph/osd_client.c           | 43 ++++++++++++++++++++++++++++++++++++-----
>>>  3 files changed, 43 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/fs/ceph/super.c b/fs/ceph/super.c
>>> index 3c1155803444..40664e13cc0f 100644
>>> --- a/fs/ceph/super.c
>>> +++ b/fs/ceph/super.c
>>> @@ -793,6 +793,7 @@ static void ceph_umount_begin(struct super_block *sb)
>>>         if (!fsc)
>>>                 return;
>>>         fsc->mount_state = CEPH_MOUNT_SHUTDOWN;
>>> +       ceph_osdc_abort_requests(&fsc->client->osdc, -EIO);
>>>         ceph_mdsc_force_umount(fsc->mdsc);
>>>         return;
>>> }
>>>
>>> diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
>>> index b73dd7ebe585..f61736963236 100644
>>> --- a/include/linux/ceph/osd_client.h
>>> +++ b/include/linux/ceph/osd_client.h
>>> @@ -347,6 +347,7 @@ struct ceph_osd_client {
>>>         struct rb_root          linger_map_checks;
>>>         atomic_t                num_requests;
>>>         atomic_t                num_homeless;
>>> +       int                     abort_code;
>>
>> Why are osdc->abort_code and all the __submit_request() hunks needed?
>> If we are in a forced umount situation, no new I/Os should be accepted
>> anyway.
>
> No code guarantees that ceph_writepages_start()/writepage_nounlock() are
> not being executed when the user does a forced umount.  They may start
> new osd requests after the forced umount.

I haven't traced through the forced umount steps, but it seems like
there must be a point where we stop accepting requests and attempt to
quiesce the state.  The patch talks about avoiding getting stuck in
ceph_osdc_sync().  Is it guaranteed that no new OSD requests can be
started after it completes?

Thanks,

                Ilya
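
(Context for readers: the net/ceph/osd_client.c hunks are trimmed from
the quote above.  Below is a minimal sketch of the mechanism under
discussion, not the literal patch: abort_all_requests() is a
hypothetical stand-in for the patch's actual iteration over in-flight
requests, the locking is approximate, and complete_request() is assumed
from the existing osd_client.c.)

        /*
         * Sketch only: record the abort code and fail all in-flight
         * requests, so ceph_osdc_sync() has nothing left to wait for.
         */
        void ceph_osdc_abort_requests(struct ceph_osd_client *osdc, int err)
        {
                down_write(&osdc->lock);
                osdc->abort_code = err;         /* reject later submissions */
                abort_all_requests(osdc, err);  /* hypothetical helper:
                                                   complete every in-flight
                                                   request with err */
                up_write(&osdc->lock);
        }

        static void __submit_request(struct ceph_osd_request *req, bool wrlocked)
        {
                struct ceph_osd_client *osdc = req->r_osdc;

                /*
                 * The hunk Ilya is asking about: a request started after
                 * the forced umount (e.g. by a still-running
                 * ceph_writepages_start()) fails immediately instead of
                 * hanging.
                 */
                if (osdc->abort_code) {
                        complete_request(req, osdc->abort_code);
                        return;
                }

                /* ... normal submission path ... */
        }

This is why aborting in-flight requests alone would not suffice in
Yan's view: without the abort_code check in __submit_request(), a
writeback path racing with the forced umount could queue a fresh
request after the abort and get stuck all over again.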