Re: Brand new cluster -- pg is stuck inactive

Not sure if anyone has noticed this yet, but your osd tree does not include a host level - the OSDs sit directly under the root bucket. The default CRUSH rule places replicas on OSDs from different hosts, and since there are no hosts in the hierarchy, it cannot satisfy the rule.
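
If the hosts never show up on their own, one workaround is to create the host buckets by hand and move the OSDs under them - the host name and weight below are just examples, use your real hostnames (weight is normally the disk size in TiB):

ceph osd crush add-bucket node1 host
ceph osd crush move node1 root=default
ceph osd crush create-or-move osd.0 1.0 host=node1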

An OSD normally puts itself under its hostname in the hierarchy on restart, so perhaps you have an issue with hostname resolution.
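
The OSDs register under the machine's short hostname, so you can check what each host would report with:
hostname -s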

Try to check the output of:
ceph osd find osd.0

Does it find the right host the OSD belongs to?
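
On a cluster with a correct hierarchy the output is JSON that includes a crush_location with the host, roughly like this (values are illustrative):

{
    "osd": 0,
    "ip": "192.168.0.10:6800/1234",
    "crush_location": {
        "host": "node1",
        "root": "default"
    }
}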

Make sure you DON'T have the following line in your ceph.conf:
osd crush update on start = false
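
You can also check what a running OSD actually uses (run this on the OSD's host, against its admin socket):
ceph daemon osd.0 config get osd_crush_update_on_start

It should report true.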

Check here: http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-location
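
That page also describes pinning the location explicitly in ceph.conf if you need to, for example (host name is a placeholder):

[osd]
crush location = root=default host=node1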

Regards,
Anthony

----- Original Message -----
> From: "dE" <de.techno@xxxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, ronny+ceph-users@xxxxxxxx
> Sent: Friday, October 13, 2017 2:43:54 PM
> Subject: Re:  Brand new cluster -- pg is stuck inactive
> 
> 
> 
> Sorry, mails bounced.
> 
> ID WEIGHT TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1      0 root default
>  0      0     osd.0          up  1.00000          1.00000
>  1      0     osd.1          up  1.00000          1.00000
>  2      0     osd.2          up  1.00000          1.00000
> 
> Maybe it's because I have only 2.9GB left in the osd directory, but I
> don't see any OSD_NEARFULL warning.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


