Reboot 1 OSD server, now ceph says 60% misplaced?

One of my 9 ceph osd nodes just spontaneously rebooted.  

This particular osd server only holds 4% of total storage. 

Why, after it has come back up and rejoined the cluster, does ceph
health say that 60% of my objects are misplaced?  I'm wondering if I
have something set up wrong in my cluster. It has been operating well
for the most part for about a year, but I have noticed this sort of
behavior before. Recovery is going to take many hours. Ceph 10.2.3.
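(For reference, a minimal sketch of the standard ceph CLI commands for
inspecting this state, run from a node with the admin keyring; exact
output omitted. The `noout` flag at the end is only something one would
set around a *planned* reboot, not a fix for a spontaneous one:)

```shell
# Show overall health, including the misplaced-object percentage
ceph health detail

# Confirm the rebooted node's OSDs are back up/in, with their CRUSH weights
ceph osd tree

# Watch recovery/backfill progress live
ceph -w

# For a future planned reboot: stop the cluster from marking OSDs out
# (set before the reboot, unset after the node rejoins)
ceph osd set noout
# ... reboot the node ...
ceph osd unset noout
```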

Thanks for any insights you may be able to provide!

-- 
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
