OSD will not start

I have an OSD that went down while the cluster was recovering from another OSD being reweighted. The cluster appears to be stuck in recovery, since the numbers of degraded and misplaced objects are not decreasing.

It is a three-node cluster in production, and the pool size is 2. The Ceph version is 0.94.3 (Hammer).

Here is a snippet of the failing OSD's log. The full log file was uploaded with ceph-post-file (tag "dfcf6dff-11cb-49b0-81b8-60bf8ff898eb").

2015-10-11 10:45:06.182615 7f9270567900 20 read_log 254342'14799 (251922'9664) delete d1aa1484/rb.0.ac3386.238e1f29.0000000bb0ad/44//8 by unknown.0.0:0 2015-10-11 00:40:34.981049
2015-10-11 10:45:06.182629 7f9270567900 20 read_log 254342'14800 (251922'9665) modify d1aa1484/rb.0.ac3386.238e1f29.0000000bb0ad/head//8 by unknown.0.0:0 2015-10-11 00:40:34.981049
2015-10-11 10:45:06.182661 7f9270567900 20 read_log 6 divergent_priors
2015-10-11 10:45:06.184076 7f9270567900 10 read_log checking for missing items over interval (0'0,254342'14800]
2015-10-11 10:45:11.861683 7f9270567900 15 read_log missing 251925'9669,e9ea1484/rb.0.ac3386.238e1f29.000000187097/head//8
2015-10-11 10:45:11.861767 7f9270567900 15 read_log missing 251925'9668,e9ea1484/rb.0.ac3386.238e1f29.000000187097/44//8
2015-10-11 10:45:11.861823 7f9270567900 15 read_log missing 251925'9667,c4ea1484/rb.0.ac3386.238e1f29.00000022717d/head//8
2015-10-11 10:45:11.861877 7f9270567900 15 read_log missing 251925'9666,c4ea1484/rb.0.ac3386.238e1f29.00000022717d/68//8
2015-10-11 10:45:11.924425 7f9270567900 -1 osd/PGLog.cc: In function 'static void PGLog::read_log(ObjectStore*, coll_t, coll_t, ghobject_t, const pg_info_t&, std::map<eversion_t, hobject_t>&, PGLog::IndexedLog&, pg_missing_t&, std::ostringstream&, std::set<std::basic_string<char> >*)' thread 7f9270567900 time 2015-10-11 10:45:11.861976
osd/PGLog.cc: 962: FAILED assert(oi.version == i->first)
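
For what it's worth, my (possibly wrong) reading of the assert, going only by the function signature and message quoted above: read_log walks a std::map<eversion_t, hobject_t> (which I assume holds the "6 divergent_priors" mentioned earlier in the log), and for each entry it expects the version recorded in the object's on-disk object_info_t to match the map key exactly. A rough paraphrase in simplified C++ follows; this is NOT the actual PGLog.cc code, and the types and the lookup helper are stand-ins I made up for illustration:

// Rough paraphrase of the failing check -- not the real Ceph source.
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

struct eversion_t {                      // stand-in: (epoch, version) pair
  uint64_t epoch = 0, version = 0;
  bool operator==(const eversion_t &o) const {
    return epoch == o.epoch && version == o.version;
  }
  bool operator<(const eversion_t &o) const {  // needed as a std::map key
    return epoch < o.epoch || (epoch == o.epoch && version < o.version);
  }
};

struct hobject_t { std::string oid; };   // stand-in: object identity

// Hypothetical helper standing in for reading and decoding the object's
// object_info_t attribute from the object store.
eversion_t lookup_on_disk_version(const hobject_t & /*obj*/) {
  return eversion_t{};                   // placeholder value
}

// priors: the std::map<eversion_t, hobject_t> from the quoted signature.
void check_divergent_priors(const std::map<eversion_t, hobject_t> &priors) {
  for (const auto &entry : priors) {
    eversion_t on_disk = lookup_on_disk_version(entry.second);
    // The OSD expects the on-disk version to match what the PG log
    // expects for this object; any mismatch trips the assert and the
    // daemon aborts on startup.
    assert(on_disk == entry.first);
  }
}

If that reading is right, at least one object on this OSD has an object_info version that disagrees with what the PG log expects for it, and the OSD aborts on startup rather than continue.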

Here is the current "ceph status" output:

    cluster d960d672-e035-413d-ba39-8341f4131760
     health HEALTH_WARN
            54 pgs backfill
            373 pgs degraded
            1 pgs recovering
            336 pgs recovery_wait
            373 pgs stuck degraded
            391 pgs stuck unclean
            43 pgs stuck undersized
            43 pgs undersized
            recovery 88034/14758314 objects degraded (0.597%)
            recovery 280423/14758314 objects misplaced (1.900%)
            recovery 28/7330234 unfound (0.000%)
     monmap e1: 3 mons at {ceph-mon1=10.20.0.11:6789/0,ceph-mon2=10.20.0.12:6789/0,ceph-mon3=10.20.0.13:6789/0}
            election epoch 6010, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     osdmap e256816: 46 osds: 45 up, 45 in; 65 remapped pgs
      pgmap v19715504: 5184 pgs, 4 pools, 28323 GB data, 7158 kobjects
            57018 GB used, 23027 GB / 80045 GB avail
            88034/14758314 objects degraded (0.597%)
            280423/14758314 objects misplaced (1.900%)
            28/7330234 unfound (0.000%)
                4790 active+clean
                 326 active+recovery_wait+degraded
                  36 active+undersized+degraded+remapped+wait_backfill
                  18 active+remapped+wait_backfill
                   6 active+recovery_wait+undersized+degraded+remapped
                   4 active+recovery_wait+degraded+remapped
                   3 active+clean+scrubbing+deep
                   1 active+recovering+undersized+degraded+remapped
      client io 11627 kB/s rd, 36433 B/s wr, 10 op/s
Any help would be greatly appreciated!

Thanks,

Chris
