Re: Cluster Map Problems


 



We need a bit more information. If you can run "ceph osd dump" and "ceph
osd tree" and paste your ceph.conf, we might get a bit further. The
CRUSH hierarchy looks okay, but I can't see the replica size from it.

Have you followed this procedure to see if your object is getting
remapped? http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location
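
If it helps, the checks above boil down to something like this (a rough
sketch; "rbd" is the pool you mention below, and "test-object" is just a
placeholder for any object you have written):

    ceph osd dump | grep pool        # per-pool settings, including rep size
    ceph osd tree                    # CRUSH hierarchy plus up/down, in/out state of each OSD
    ceph osd map rbd test-object     # which PG and OSDs one object maps to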

On Thu, Mar 21, 2013 at 12:02 PM, Martin Mailand <martin@xxxxxxxxxxxx> wrote:
> Hi,
>
> I want to change my crushmap to reflect my setup: I have two racks with
> three hosts each. For the rbd pool I want to use a replication size of 2.
> The failure domain should be the rack, so one replica should go to each
> rack. That works so far.
> But if I shut down a host, the cluster stays degraded; I want the
> now-missing replicas to be replicated to the two remaining hosts in
> that rack.
>
> Here is the crushmap.
> http://pastebin.com/UaB6LfKs
>
> Any idea what I did wrong?
>
> -martin
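
For comparison, a rule that usually gives "one replica per rack, and
re-replicate within the rack when a host fails" looks roughly like the
sketch below. This is only an illustration, not your actual map (that's
in the pastebin above), and the root and rule names are assumptions:

    rule rbd-by-rack {
            ruleset 2
            type replicated
            min_size 1
            max_size 2
            step take default
            step chooseleaf firstn 0 type rack
            step emit
    }

With "chooseleaf firstn 0 type rack", CRUSH picks one OSD under each of
two distinct racks, and when a host goes down it can re-select another
OSD in the same rack, which is the behaviour you describe above.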



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



