There is a lot of data in this cluster (2 PB), so please help us, thanks. Before we attempt the dangerous operations described at http://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#disaster-recovery-experts, are there any suggestions? (The journal-backup step we intend to run first is sketched after the output below.)

Ceph version: 12.2.12

ceph fs status:

cephfs - 1057 clients
======
+------+---------+-------------+----------+-------+-------+
| Rank |  State  |     MDS     | Activity |  dns  |  inos |
+------+---------+-------------+----------+-------+-------+
|  0   |  failed |             |          |       |       |
|  1   | resolve | n31-023-214 |          |    0  |    0  |
|  2   | resolve | n31-023-215 |          |    0  |    0  |
|  3   | resolve | n31-023-218 |          |    0  |    0  |
|  4   | resolve | n31-023-220 |          |    0  |    0  |
|  5   | resolve | n31-023-217 |          |    0  |    0  |
|  6   | resolve | n31-023-222 |          |    0  |    0  |
|  7   | resolve | n31-023-216 |          |    0  |    0  |
|  8   | resolve | n31-023-221 |          |    0  |    0  |
|  9   | resolve | n31-023-223 |          |    0  |    0  |
|  10  | resolve | n31-023-225 |          |    0  |    0  |
|  11  | resolve | n31-023-224 |          |    0  |    0  |
|  12  | resolve | n31-023-219 |          |    0  |    0  |
|  13  | resolve | n31-023-229 |          |    0  |    0  |
+------+---------+-------------+----------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 2843M | 34.9T |
|   cephfs_data   |   data   | 2580T |  731T |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| n31-023-227 |
| n31-023-226 |
| n31-023-228 |
+-------------+

ceph fs dump:

dumped fsmap epoch 22712
e22712
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name cephfs
epoch 22711
flags 4
created 2018-11-30 10:05:06.015325
modified 2019-06-19 23:37:41.400961
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 22246
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
max_mds 14
in 0,1,2,3,4,5,6,7,8,9,10,11,12,13
up {1=31684663,2=31684674,3=31684576,4=31684673,5=31684678,6=31684612,7=31684688,8=31684683,9=31684698,10=31684695,11=31684693,12=31684586,13=31684617}
failed
damaged 0
stopped
data_pools [2]
metadata_pool 1
inline_data disabled
balancer
standby_count_wanted 1
31684663: 10.31.23.214:6800/829459839 'n31-023-214' mds.1.22682 up:resolve seq 6
31684674: 10.31.23.215:6800/2483123757 'n31-023-215' mds.2.22683 up:resolve seq 3
31684576: 10.31.23.218:6800/3381299029 'n31-023-218' mds.3.22683 up:resolve seq 3
31684673: 10.31.23.220:6800/3540255817 'n31-023-220' mds.4.22685 up:resolve seq 3
31684678: 10.31.23.217:6800/4004537495 'n31-023-217' mds.5.22689 up:resolve seq 3
31684612: 10.31.23.222:6800/1482899141 'n31-023-222' mds.6.22691 up:resolve seq 3
31684688: 10.31.23.216:6800/820115186 'n31-023-216' mds.7.22693 up:resolve seq 3
31684683: 10.31.23.221:6800/1996416037 'n31-023-221' mds.8.22693 up:resolve seq 3
31684698: 10.31.23.223:6800/2807778042 'n31-023-223' mds.9.22695 up:resolve seq 3
31684695: 10.31.23.225:6800/101451176 'n31-023-225' mds.10.22702 up:resolve seq 3
31684693: 10.31.23.224:6800/1597373084 'n31-023-224' mds.11.22695 up:resolve seq 3
31684586: 10.31.23.219:6800/3640206080 'n31-023-219' mds.12.22695 up:resolve seq 3
31684617: 10.31.23.229:6800/3511814011 'n31-023-229' mds.13.22697 up:resolve seq 3

Standby daemons:

31684637: 10.31.23.227:6800/1987867930 'n31-023-227' mds.-1.0 up:standby seq 2
31684690: 10.31.23.226:6800/3695913629 'n31-023-226' mds.-1.0 up:standby seq 2
31689991: 10.31.23.228:6800/2624666750 'n31-023-228' mds.-1.0 up:standby seq 2
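
For reference, the only step we plan to take before anything on that page is the non-destructive journal backup it recommends, covering rank 0 and each of the 13 ranks stuck in resolve. A minimal sketch, assuming the --rank=<fs>:<rank> syntax of cephfs-journal-tool is available on this 12.2.12 build and that /backup/ is just a placeholder path on a node with enough space:

    # Export every rank's journal to a file before touching anything
    # (reads the journal objects only, does not modify the cluster).
    for rank in $(seq 0 13); do
        cephfs-journal-tool --rank=cephfs:${rank} journal export /backup/cephfs-journal.${rank}.bin
    done

    # Check each journal's integrity without modifying it.
    for rank in $(seq 0 13); do
        cephfs-journal-tool --rank=cephfs:${rank} journal inspect
    done

Only once those copies exist would we consider the recover_dentries / journal reset / fs reset steps from that page, so any less drastic way to get rank 0 out of failed/damaged and let the resolving ranks finish would be very welcome.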