Re: Filesystem offline after enabling cephadm


 



Hi Javier,

It seems the MDS daemons deployed by cephadm are running 16.2.5. Please check your “container_image” config (it should be quay.io/ceph/ceph:v16.2.7 if you are not running your own registry). Then redeploy the MDS daemons with “ceph orch redeploy <service_name>”, where <service_name> can be found with “ceph orch ls”. I guess it is “mds.cephfs”.
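
For reference, a minimal sketch of that check-and-redeploy sequence (assuming the image is set globally and the service really is named “mds.cephfs”):

ceph versions                            # confirm which daemons still report 16.2.5
ceph config dump | grep container_image  # see which image cephadm is configured to use
ceph config set global container_image quay.io/ceph/ceph:v16.2.7
ceph orch ls                             # find the actual MDS service name
ceph orch redeploy mds.cephfs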

Weiwen Hu

From: Tecnologia Charne.Net <tecno@xxxxxxxxxx>
Sent: December 29, 2021, 2:03
To: ceph-users@xxxxxxx
Subject: Filesystem offline after enabling cephadm

Today I upgraded from Pacific 16.2.6 to 16.2.7.
Since some items in the dashboard weren't available (Cluster->Hosts->Versions,
for example) because I didn't have cephadm enabled, I activated it and
adopted every mon, mgr, and osd in the cluster, following the instructions at
https://docs.ceph.com/en/pacific/cephadm/adoption/

Everything was fine until step 10: Redeploy MDS daemons....
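
For reference, the adoption sequence on that page boils down to roughly the following (host names, OSD ids, and the MDS placement are placeholders, not my exact commands):

cephadm adopt --style legacy --name mon.<hostname>   # repeat for each mon
cephadm adopt --style legacy --name mgr.<hostname>   # repeat for each mgr
cephadm adopt --style legacy --name osd.<id>         # repeat for each OSD
ceph orch apply mds cephfs --placement=3             # step 10: cephadm deploys new MDS daemons
systemctl stop ceph-mds.target                       # then stop the legacy MDS daemons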


I have now:

# ceph health detail
HEALTH_ERR 1 filesystem is degraded; 1 filesystem has a failed mds
daemon; 1 filesystem is offline
[WRN] FS_DEGRADED: 1 filesystem is degraded
     fs cephfs is degraded
[WRN] FS_WITH_FAILED_MDS: 1 filesystem has a failed mds daemon
     fs cephfs has 2 failed mdss
[ERR] MDS_ALL_DOWN: 1 filesystem is offline
     fs cephfs is offline because no MDS is active for it.


# ceph fs status
cephfs - 0 clients
======
RANK  STATE   MDS  ACTIVITY  DNS  INOS  DIRS  CAPS
  0    failed
  1    failed
       POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  1344M  20.8T
   cephfs_data      data     530G  8523G
    STANDBY MDS
cephfs.mon1.qhueuv
cephfs.mon2.zrswzj
cephfs.mon3.cusflb
MDS version: ceph version 16.2.5-387-g7282d81d (7282d81d2c500b5b0e929c07971b72444c6ac424) pacific (stable)


# ceph fs dump
e1777
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client
writeable ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds
uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  1776
flags  12
created  2019-07-03T14:11:34.215467+0000
modified  2021-12-28T17:42:18.197012+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
required_client_features  {}
last_failure  0
last_failure_osd_epoch  218775
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layouts on dirs,4=dir inode in separate
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds
uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds  1
in  0,1
up  {}
failed  0,1
damaged
stopped
data_pools  [14]
metadata_pool  13
inline_data  disabled
balancer
standby_count_wanted  1


Standby daemons:

[mds.cephfs.mon1.qhueuv{-1:21378633} state up:standby seq 1 join_fscid=1
addr
[v2:192.168.15.200:6800/3327091876,v1:192.168.15.200:6801/3327091876]
compat {c=[1],r=[1],i=[77f]}]
[mds.cephfs.mon2.zrswzj{-1:21384283} state up:standby seq 1 join_fscid=1
addr [v2:192.168.15.203:6800/838079265,v1:192.168.15.203:6801/838079265]
compat {c=[1],r=[1],i=[77f]}]
[mds.cephfs.mon3.cusflb{-1:21393659} state up:standby seq 1 join_fscid=1
addr
[v2:192.168.15.205:6800/1887883707,v1:192.168.15.205:6801/1887883707]
compat {c=[1],r=[1],i=[77f]}]
dumped fsmap epoch 1777


Any clue will be most welcome!


Thanks in advance.


Javier.-



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




