forcing an osd down

I've noticed this happen before, but this time I can't get the OSD to stay down at all; it just keeps coming back up:

# ceph osd down osd.48
marked down osd.48.

# ceph osd tree |grep osd.48
48   3.64000         osd.48         down        0          1.00000

# ceph osd tree |grep osd.48
48   3.64000         osd.48           up        0          1.00000
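My understanding is that `ceph osd down` only marks the OSD down in the osdmap; as long as the ceph-osd daemon is running, it reports itself back up on its next heartbeat, so the mark doesn't stick. One workaround I've seen suggested (untested here) is to set the noup flag before marking it down, so the monitors refuse to mark it up again. Note that on this release noup is a cluster-wide flag, so it blocks every OSD from being marked up until it's unset:

# ceph osd set noup
# ceph osd down osd.48

# ceph osd tree | grep osd.48

and once you're done working on it:

# ceph osd unset noup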



health HEALTH_WARN
            2 pgs backfilling
            1 pgs degraded
            2 pgs stuck unclean
            recovery 18/164089686 objects degraded (0.000%)
            recovery 1467405/164089686 objects misplaced (0.894%)
     monmap e1: 3 mons at {0=192.168.4.10:6789/0,1=192.168.4.11:6789/0,2=192.168.4.12:6789/0}
            election epoch 210, quorum 0,1,2 0,1,2
     mdsmap e166: 1/1/1 up {0=0=up:active}, 2 up:standby
     osdmap e25733: 45 osds: 45 up, 44 in; 2 remapped pgs
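Since the tree shows a reweight of 0 and the status shows 44 of 45 in, osd.48 is already out; its daemon just keeps re-registering as up. The only reliable way I know to make it stay down is to stop the daemon on its host. A sketch, assuming a systemd-managed deployment (the unit name may differ on your install):

# systemctl stop ceph-osd@48

or, on older sysvinit-based installs:

# service ceph stop osd.48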


