Re: Cephfs - MDS all up:standby, not becoming up:active

On Sep 18, 2021, at 22:50, Eric Dold <dold.eric@xxxxxxxxx> wrote:

Hi Patrick

Thanks a lot!

After setting
ceph fs compat cephfs add_incompat 7 "mds uses inline data"
the filesystem is working again.

So should I leave this setting as it is now, or do I have to remove it
again in a future update?

If I understand it [1] correctly, this change is made by the mons automatically, as long as only one MDS rank is in and standby-replay is disabled before initiating the upgrade. So you don't need to remove it again; it is part of the normal upgrade process.

[1]: https://github.com/ceph/ceph/blob/v16.2.6/src/mds/FSMap.cc#L948
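
For anyone finding this thread later, a rough sketch of the documented pre-upgrade steps (using the fs name "cephfs" from this thread; the final max_mds value should match whatever the cluster used before):

ceph fs set cephfs allow_standby_replay false
ceph fs set cephfs max_mds 1

Then wait until ceph status shows a single active MDS, upgrade and restart the MDS daemons, and restore the previous rank count, e.g.:

ceph fs set cephfs max_mds 2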

On Sat, Sep 18, 2021 at 2:28 AM Patrick Donnelly <pdonnell@xxxxxxxxxx>
wrote:

On Fri, Sep 17, 2021 at 6:57 PM Eric Dold <dold.eric@xxxxxxxxx> wrote:

Hi Patrick

Here's the output of ceph fs dump:

e226256
enable_multiple, ever_enabled_multiple: 0,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 2

Filesystem 'cephfs' (2)
fs_name cephfs
epoch   226254
flags   12
created 2019-03-20T14:06:32.588328+0100
modified        2021-09-17T14:47:08.513192+0200
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  91941
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds
uses versioned encoding,6=dirfrag is stored in omap,8=no anchor
table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0,1
up      {}
failed  0,1

Run:

ceph fs compat cephfs add_incompat 7 "mds uses inline data"
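
To verify it took effect, the filesystem's compat line in ceph fs dump should then include "7=mds uses inline data", and a standby should be able to take over the failed ranks, e.g.:

ceph fs dump | grep compat
ceph fs status cephfs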


It's interesting you're in the same situation (two ranks). Are you
using cephadm? If not, were you not aware of the MDS upgrade procedure
[1]?

[1] https://docs.ceph.com/en/pacific/cephfs/upgrading/

--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx