Re: Cephfs - MDS all up:standby, not becoming up:active

Hi Patrick,

Thanks a lot!

After setting
ceph fs compat cephfs add_incompat 7 "mds uses inline data"
the filesystem is working again.

So should I leave this setting as it is now, or do I have to remove it
again in a future update?
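For anyone hitting the same state: one quick way to confirm the flag took effect is to look for feature 7 in the `incompat` set printed by `ceph fs dump`. The following is a small, hypothetical Python sketch (not part of the original thread); the `incompat_features` helper and the parsing both assume the dump format quoted in the reply below.

```python
import re

def incompat_features(dump_text):
    """Return {feature_id: name} parsed from the first incompat={...}
    block in `ceph fs dump` output (format as quoted in this thread)."""
    m = re.search(r"incompat=\{([^}]*)\}", dump_text)
    if not m:
        return {}
    feats = {}
    for entry in m.group(1).split(","):
        fid, _, name = entry.partition("=")
        feats[int(fid)] = name
    return feats

# Sample line in the shape of the dump output quoted below.
dump = "compat={},rocompat={},incompat={1=base v0.20,7=mds uses inline data}"
feats = incompat_features(dump)
print(7 in feats)  # True once the add_incompat command has been applied
```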

On Sat, Sep 18, 2021 at 2:28 AM Patrick Donnelly <pdonnell@xxxxxxxxxx>
wrote:

> On Fri, Sep 17, 2021 at 6:57 PM Eric Dold <dold.eric@xxxxxxxxx> wrote:
> >
> > Hi Patrick
> >
> > Here's the output of ceph fs dump:
> >
> > e226256
> > enable_multiple, ever_enabled_multiple: 0,1
> > default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
> > writeable ranges,3=default file layouts on dirs,4=dir inode in separate
> > object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> > anchor table,9=file layout v2,10=snaprealm v2}
> > legacy client fscid: 2
> >
> > Filesystem 'cephfs' (2)
> > fs_name cephfs
> > epoch   226254
> > flags   12
> > created 2019-03-20T14:06:32.588328+0100
> > modified        2021-09-17T14:47:08.513192+0200
> > tableserver     0
> > root    0
> > session_timeout 60
> > session_autoclose       300
> > max_file_size   1099511627776
> > required_client_features        {}
> > last_failure    0
> > last_failure_osd_epoch  91941
> > compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> > ranges,3=default file layouts on dirs,4=dir inode in separate
> > object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> > anchor table,9=file layout v2,10=snaprealm v2}
> > max_mds 1
> > in      0,1
> > up      {}
> > failed  0,1
>
> Run:
>
> ceph fs compat cephfs add_incompat 7 "mds uses inline data"
>
>
> It's interesting you're in the same situation (two ranks). Are you
> using cephadm? If not, were you not aware of the MDS upgrade procedure
> [1]?
>
> [1] https://docs.ceph.com/en/pacific/cephfs/upgrading/
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Principal Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


