Re: How are replicas spread in default crush configuration?

On Wed, Nov 23, 2016 at 4:11 PM, Chris Taylor <ctaylor@xxxxxxxxxx> wrote:
> Kevin,
>
> After changing the pool size to 3, make sure the min_size is set to 1 to
> allow 2 of the 3 hosts to be offline.

If you do this, be aware of the flip side: while the cluster is running in
that degraded state, losing the single remaining host will render the most
recent writes unrecoverable, since they were only witnessed by that one OSD.
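
For reference, size and min_size can be set per pool; a minimal sketch,
assuming a pool named "rbd" (substitute your own pool names):

    # raise the replica count to 3 (data re-replicates in the background)
    ceph osd pool set rbd size 3

    # allow I/O with only one replica up; this is the risky setting
    # described above, since acknowledged writes may then exist on one OSD only
    ceph osd pool set rbd min_size 1

    # verify the current values
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size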

>
> http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
>
> How many MONs do you have and are they on the same OSD hosts? If you have 3
> MONs running on the OSD hosts and two go offline, you will not have a quorum
> of MONs and I/O will be blocked.
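>
> A quick way to check the monitor count and quorum state (assuming admin
> access on one of the nodes):
>
>     # list the MONs and show which are currently in quorum
>     ceph mon stat
>
>     # more detail, including quorum members by name
>     ceph quorum_status --format json-pretty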
>
> I would also check your CRUSH map. I believe you want to make sure your
> rules have "step chooseleaf firstn 0 type host" and not "... type osd" so
> that replicas are on different hosts. I have not had to make that change
> before so you will want to read up on it first. Don't take my word for it.
>
> http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-parameters
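>
> For reference, inspecting and editing the CRUSH map goes roughly like this
> (the file names here are arbitrary):
>
>     # dump and decompile the current CRUSH map
>     ceph osd getcrushmap -o crushmap.bin
>     crushtool -d crushmap.bin -o crushmap.txt
>
>     # edit crushmap.txt, then recompile and inject it
>     crushtool -c crushmap.txt -o crushmap-new.bin
>     ceph osd setcrushmap -i crushmap-new.bin
>
> The default replicated rule typically looks like this; "type host" on the
> chooseleaf line is what spreads replicas across hosts rather than OSDs:
>
>     rule replicated_ruleset {
>             ruleset 0
>             type replicated
>             min_size 1
>             max_size 10
>             step take default
>             step chooseleaf firstn 0 type host
>             step emit
>     }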
>
> Hope that helps.
>
>
>
> Chris
>
>
>
> On 2016-11-23 1:32 pm, Kevin Olbrich wrote:
>
> Hi,
>
> just to make sure, as I did not find a reference in the docs:
> Are replicas spread across hosts or "just" OSDs?
>
> I am using a 5-OSD cluster (4 pools, 128 PGs each) with size = 2. Currently
> each OSD is a ZFS-backed storage array.
> I have now installed a server that is planned to host 4 OSDs (after which I
> will set size to 3).
>
> I want to make sure we can tolerate two offline hosts (in terms of hardware
> failures). Is my assumption correct?
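>
> (For reference, one way to check placement directly is to map a test object
> and compare against the OSD tree; pool name "rbd" and object name "test-obj"
> are just placeholders:
>
>     # show the topology: which OSDs sit under which hosts
>     ceph osd tree
>
>     # show which OSDs a given object maps to; compare against the tree
>     ceph osd map rbd test-obj
> )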
>
> Best regards,
> Kevin Olbrich.
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



