Re: 100.000% pgs unknown

Check your network, check what the difference is between the 2 OSDs that did start and the ones that didn't, and check whether /etc/ceph is readable. (And please send plain-text email.)
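For the "/etc/ceph readable" check, a minimal sketch (the paths are the stock Ceph locations and may differ in your deployment; `check_readable` is a throwaway helper, not a Ceph tool). Run it as the user the OSD daemons run as, usually "ceph":

```shell
#!/bin/sh
# Throwaway helper: report whether each given file is readable by the
# current user.
check_readable() {
  for f in "$@"; do
    if [ -r "$f" ]; then
      echo "OK       $f"
    else
      echo "MISSING  $f (absent or not readable)"
    fi
  done
}

# Stock locations; adjust "ceph-0" to your actual OSD ids.
check_readable /etc/ceph/ceph.conf \
               /etc/ceph/ceph.client.admin.keyring \
               /var/lib/ceph/osd/ceph-0/keyring
```

If ceph.conf or a keyring shows up as not readable for the daemon user, that alone explains "failed to fetch mon config".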



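On the `handle_auth_bad_method ... allowed_methods [2] but i only support [2]` lines in the quoted message below: method 2 is cephx, so the auth method itself matches and the failure is usually the credential (a missing/stale keyring) or clock skew, not a disabled method. A hedged sketch for comparing keys; `keyring_key` is an ad-hoc helper, not part of Ceph, and the paths are the defaults:

```shell
#!/bin/sh
# Ad-hoc helper: extract the base64 secret from a Ceph keyring file so the
# on-disk OSD key can be diffed against what the monitors have registered.
keyring_key() {
  awk -F ' = ' '/^[[:space:]]*key[[:space:]]*=/ { gsub(/[[:space:]]/, "", $2); print $2 }' "$1"
}

# Example usage on an OSD host (osd.0 as an example id):
#   ceph auth get osd.0 -o /tmp/osd.0.mon.keyring
#   diff <(keyring_key /tmp/osd.0.mon.keyring) \
#        <(keyring_key /var/lib/ceph/osd/ceph-0/keyring)
#
# Also rule out clock skew, since cephx tickets are time-sensitive:
#   timedatectl    # or: chronyc tracking
```

If the keys differ or the clocks drifted, fixing that is more likely to help than touching the CRUSH map.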
> -----Original Message-----
> From: 峰 <286204879@xxxxxx>
> Sent: Tuesday, 3 August 2021 09:37
> To: ceph-users <ceph-users@xxxxxxx>
> Subject:  100.000% pgs unknown
> 
> Hello, my ceph cluster has crashed, please help me!
> 
> #ceph -s
>   cluster:
>     id:     5e1d9b55-9040-494a-9011-d73469dabe1a
>     health: HEALTH_WARN
>             1 filesystem is degraded
>             1 MDSs report slow metadata IOs
>             27 osds down
>             7 hosts (28 osds) down
>             1 root (28 osds) down
>             Reduced data availability: 1760 pgs inactive
> 
>   services:
>     mon: 3 daemons, quorum ceph-m-51,ceph-m-52,ceph-m-53 (age 24h)
>     mgr: ceph-m-51(active, since 4h)
>     mds: cephfs:1/1 {0=ceph-m-51=up:replay} 2 up:standby
>     osd: 30 osds: 2 up (since 24h), 29 in (since 24h)
> 
>   data:
>     pools:   7 pools, 1760 pgs
>     objects: 0 objects, 0 B
>     usage:   0 B used, 0 B / 0 B avail
>     pgs:     100.000% pgs unknown
>              1760 unknown
> 
> 
> 
> #ceph osd tree
> ID  CLASS WEIGHT   TYPE NAME              STATUS REWEIGHT PRI-AFF
> -25        2.18320 root cache-ssd
> -22        2.18320     host ceph-node-47
>  24   ssd  1.09160         osd.24             up  1.00000 1.00000
>  25   ssd  1.09160         osd.25             up  1.00000 1.00000
>  -1       27.44934 root default
> -28        3.58989     host ceph-node-110
>  26   hdd  0.90970         osd.26           down  0.79999 1.00000
>  27   hdd  0.90970         osd.27           down  0.79999 1.00000
>  28   hdd  0.90970         osd.28           down  0.79999 1.00000
>  29   hdd  0.86079         osd.29           down  0.79999 1.00000
>  -3       14.55475     host ceph-node-54
>   0   hdd  3.63869         osd.0            down        0 1.00000
>   1   hdd  3.63869         osd.1            down  1.00000 1.00000
>   2   hdd  3.63869         osd.2            down  1.00000 1.00000
>   3   hdd  3.63869         osd.3            down  1.00000 1.00000
>  -5        1.76936     host ceph-node-55
>   4   hdd  0.45479         osd.4            down  0.50000 1.00000
>   5   hdd  0.45479         osd.5            down  0.50000 1.00000
>   6   hdd  0.45479         osd.6            down  0.50000 1.00000
>   7   hdd  0.40500         osd.7            down  0.50000 1.00000
>  -7        2.22427     host ceph-node-56
>   8   hdd  0.90970         osd.8            down  0.50000 1.00000
>   9   hdd  0.45479         osd.9            down  0.50000 1.00000
>  10   hdd  0.45479         osd.10           down  0.50000 1.00000
>  11   hdd  0.40500         osd.11           down  0.50000 1.00000
>  -9        1.77036     host ceph-node-57
>  12   hdd  0.45479         osd.12           down  0.50000 1.00000
>  13   hdd  0.45479         osd.13           down  0.50000 1.00000
>  14   hdd  0.45479         osd.14           down  0.50000 1.00000
>  15   hdd  0.40599         osd.15           down  0.50000 1.00000
> -11        1.77036     host ceph-node-58
>  16   hdd  0.45479         osd.16           down  0.50000 1.00000
>  17   hdd  0.45479         osd.17           down  0.50000 1.00000
>  18   hdd  0.45479         osd.18           down  0.50000 1.00000
>  19   hdd  0.40599         osd.19           down  0.50000 1.00000
> -13        1.77036     host ceph-node-59
>  20   hdd  0.45479         osd.20           down  0.50000 1.00000
>  21   hdd  0.45479         osd.21           down  0.50000 1.00000
>  22   hdd  0.45479         osd.22           down  0.50000 1.00000
>  23   hdd  0.40599         osd.23           down  0.50000 1.00000
> 
> 
> 
> #ceph fs dump
> e110784
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
> 
> Filesystem 'cephfs' (1)
> fs_name cephfs
> epoch   110784
> flags   32
> created 2020-09-18 11:05:47.190096
> modified        2021-08-03 15:33:04.379969
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  18808
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 1
> in      0
> up      {0=554577}
> failed
> damaged
> stopped
> data_pools      [13]
> metadata_pool   14
> inline_data     disabled
> balancer
> standby_count_wanted    1
> [mds.ceph-m-51{0:554577} state up:replay seq 37643 addr
> [v2:192.168.221.51:6800/1640891715,v1:192.168.221.51:6801/1640891715]]
> 
> 
> Standby daemons:
> [mds.ceph-m-52{-1:554746} state up:standby seq 2 addr
> [v2:192.168.221.52:6800/1652218221,v1:192.168.221.52:6801/1652218221]]
> [mds.ceph-m-53{-1:554789} state up:standby seq 2 addr
> [v2:192.168.221.53:6800/1236333507,v1:192.168.221.53:6801/1236333507]]
> dumped fsmap epoch 110784
> 
> 
> 
> 
> 
> When I start an OSD, the error message is:
> 
> Aug 03 15:05:16 ceph-node-54 ceph-osd[101387]: 2021-08-03 15:05:16.337
> 7f4ac0702700 -1 monclient(hunting): handle_auth_bad_method server
> allowed_methods [2] but i only support [2]
> Aug 03 15:05:16 ceph-node-54 ceph-osd[101387]: 2021-08-03 15:05:16.337
> 7f4abff01700 -1 monclient(hunting): handle_auth_bad_method server
> allowed_methods [2] but i only support [2]
> Aug 03 15:05:16 ceph-node-54 ceph-osd[101387]: failed to fetch mon
> config (--no-mon-config to skip)
> 
> Every OSD is like this!
> I exported the CRUSH map and found no abnormalities. What should I do
> next to fix this?
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx