Re: Reboot 1 OSD server, now ceph says 60% misplaced?

On Sun, Nov 19, 2017 at 8:43 PM Tracy Reed <treed@xxxxxxxxxxxxxxx> wrote:
One of my 9 ceph osd nodes just spontaneously rebooted.

This particular osd server only holds 4% of total storage.

Why, after it has come back up and rejoined the cluster, does ceph
health say that 60% of my objects are misplaced?  I'm wondering if I
have something set up wrong in my cluster. This cluster has been
operating well for the most part for about a year but I have noticed
this sort of behavior before. This is going to take many hours to
recover. Ceph 10.2.3.

Thanks for any insights you may be able to provide!

Can you include the results of "ceph osd dump" and your crush map?
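
For reference, something along these lines should capture that state (just a sketch, assuming you run it from a node with the client.admin keyring; the output file names are arbitrary):

    # cluster health and recovery status
    ceph -s
    ceph health detail

    # OSD map and crush hierarchy as the monitors currently see them
    ceph osd dump > osd-dump.txt
    ceph osd tree > osd-tree.txt

    # decompiled crush map (crushtool ships with ceph)
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
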

It sounds rather as if your OSDs moved themselves in the crush map when they rebooted. I'm not aware of any reason that should happen in Jewel (although some people experienced it on upgrade to Luminous if they had oddly-configured clusters).
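
One thing worth checking on the rebooted node: by default OSDs update their own crush location when they start ("osd crush update on start"), so a changed hostname or a bad "crush location" setting can move them in the hierarchy on reboot. A rough sketch of how you might check and, if needed, pin that (osd.12 and the host/root names below are just placeholders):

    # on the OSD host, ask the daemon what it is actually using
    ceph daemon osd.12 config get osd_crush_update_on_start

    # to stop OSDs from relocating themselves at startup, in ceph.conf under [osd]:
    #   osd crush update on start = false
    # or pin an explicit location instead:
    #   crush location = root=default host=ceph-osd-node-1
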
 

--
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
