Re: [PATCH 4/8] ceph: allow remounting aborted mount

On Tue, Jun 18, 2019 at 1:30 AM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>
> On Mon, 2019-06-17 at 20:55 +0800, Yan, Zheng wrote:
> > When remounting aborted mount, also reset client's entity addr.
> > 'umount -f /ceph; mount -o remount /ceph' can be used for recovering
> > from blacklist.
> >
>
> Why do I need to umount here? Once the filesystem is unmounted, then the
> '-o remount' becomes superfluous, no? In fact, I get an error back when
> I try to remount an unmounted filesystem:
>
>     $ sudo umount -f /mnt/cephfs ; sudo mount -o remount /mnt/cephfs
>     mount: /mnt/cephfs: mount point not mounted or bad option.
>
> My client isn't blacklisted above, so I guess you're counting on the
> umount returning without having actually unmounted the filesystem?
>
> I think this ought to not need a umount first. From a UI standpoint,
> just doing a "mount -o remount" ought to be sufficient to clear this.
>
> Also, how would an admin know that this is something they ought to try?
> Is there a way for them to know that their client has been blacklisted?
In our deployment, we actually capture the blacklist event and convert
it to a customer-facing event to let them know their client(s) have
been blacklisted.  Upon receiving such a notification, they can
reconnect the clients to the MDS and minimize the downtime.
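
For reference, the detection-and-recovery flow described above might look
roughly like the following sketch (the mountpoint path, root access, and
availability of the `ceph` CLI on the client host are all assumptions; the
commands are illustrative, not a supported procedure):

```shell
# Sketch: detect blacklisting and recover with the remount flow this
# patch enables. MNT is an assumed mountpoint.
MNT=/mnt/cephfs

if command -v ceph >/dev/null 2>&1; then
    # List currently blacklisted entity addrs; a fenced-off client
    # shows up here.
    ceph osd blacklist ls

    # Abort the stuck mount, then remount to reset the client's
    # entity addr and reconnect to the mon/OSDs.
    umount -f "$MNT"
    mount -o remount "$MNT"
else
    echo "ceph CLI not available; skipping demo"
fi
```

With this patch applied, the `mount -o remount` step is what resets the
entity addr and clears the abort error, so the client can re-establish
its sessions.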
>
> > Signed-off-by: "Yan, Zheng" <zyan@xxxxxxxxxx>
> > ---
> >  fs/ceph/mds_client.c | 16 +++++++++++++---
> >  fs/ceph/super.c      | 23 +++++++++++++++++++++--
> >  2 files changed, 34 insertions(+), 5 deletions(-)
> >
> > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > index 19c62cf7d5b8..188c33709d9a 100644
> > --- a/fs/ceph/mds_client.c
> > +++ b/fs/ceph/mds_client.c
> > @@ -1378,9 +1378,12 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
> >               struct ceph_cap_flush *cf;
> >               struct ceph_mds_client *mdsc = fsc->mdsc;
> >
> > -             if (ci->i_wrbuffer_ref > 0 &&
> > -                 READ_ONCE(fsc->mount_state) == CEPH_MOUNT_SHUTDOWN)
> > -                     invalidate = true;
> > +             if (READ_ONCE(fsc->mount_state) == CEPH_MOUNT_SHUTDOWN) {
> > +                     if (inode->i_data.nrpages > 0)
> > +                             invalidate = true;
> > +                     if (ci->i_wrbuffer_ref > 0)
> > +                             mapping_set_error(&inode->i_data, -EIO);
> > +             }
> >
> >               while (!list_empty(&ci->i_cap_flush_list)) {
> >                       cf = list_first_entry(&ci->i_cap_flush_list,
> > @@ -4350,7 +4353,12 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
> >               session = __ceph_lookup_mds_session(mdsc, mds);
> >               if (!session)
> >                       continue;
> > +
> > +             if (session->s_state == CEPH_MDS_SESSION_REJECTED)
> > +                     __unregister_session(mdsc, session);
> > +             __wake_requests(mdsc, &session->s_waiting);
> >               mutex_unlock(&mdsc->mutex);
> > +
> >               mutex_lock(&session->s_mutex);
> >               __close_session(mdsc, session);
> >               if (session->s_state == CEPH_MDS_SESSION_CLOSING) {
> > @@ -4359,9 +4367,11 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
> >               }
> >               mutex_unlock(&session->s_mutex);
> >               ceph_put_mds_session(session);
> > +
> >               mutex_lock(&mdsc->mutex);
> >               kick_requests(mdsc, mds);
> >       }
> > +
> >       __wake_requests(mdsc, &mdsc->waiting_for_map);
> >       mutex_unlock(&mdsc->mutex);
> >  }
> > diff --git a/fs/ceph/super.c b/fs/ceph/super.c
> > index 67eb9d592ab7..a6a3c065f697 100644
> > --- a/fs/ceph/super.c
> > +++ b/fs/ceph/super.c
> > @@ -833,8 +833,27 @@ static void ceph_umount_begin(struct super_block *sb)
> >
> >  static int ceph_remount(struct super_block *sb, int *flags, char *data)
> >  {
> > -     sync_filesystem(sb);
> > -     return 0;
> > +     struct ceph_fs_client *fsc = ceph_sb_to_client(sb);
> > +
> > +     if (fsc->mount_state != CEPH_MOUNT_SHUTDOWN) {
> > +             sync_filesystem(sb);
> > +             return 0;
> > +     }
> > +
> > +     /* Make sure all page caches get invalidated.
> > +      * see remove_session_caps_cb() */
> > +     flush_workqueue(fsc->inode_wq);
> > +     /* In case that we were blacklisted. This also reset
> > +      * all mon/osd connections */
> > +     ceph_reset_client_addr(fsc->client);
> > +
> > +     ceph_osdc_clear_abort_err(&fsc->client->osdc);
> > +     fsc->mount_state = 0;
> > +
> > +     if (!sb->s_root)
> > +             return 0;
> > +     return __ceph_do_getattr(d_inode(sb->s_root), NULL,
> > +                              CEPH_STAT_CAP_INODE, true);
> >  }
> >
> >  static const struct super_operations ceph_super_ops = {
>
> --
> Jeff Layton <jlayton@xxxxxxxxxx>
>


-- 
Regards
Huang Zhiteng


