Re: Even number of replicas?

Hi Nico,

 No, 2 data centers.

- We use size=4
- our CRUSH map is configured with OSDs assigned to 2 separate data center
locations, so we end up with 2 OSDs in use in each DC
- min_size=2
- we have 1 monitor in each DC
- we have a 3rd monitor in a 3rd DC, connected by VPN to each of the other
2 DCs, to provide a tie-breaker for the monitors (rough commands for the
pool and CRUSH hierarchy are sketched below).
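
In case it helps, here is roughly what the pool settings and CRUSH
hierarchy look like as CLI commands. The pool name "rbd" and the bucket
names "dc1"/"dc2"/"host1"/"host2" are just placeholders, not our real names:

    # 4 copies, keep serving I/O as long as 2 copies are available
    ceph osd pool set rbd size 4
    ceph osd pool set rbd min_size 2

    # one "datacenter" bucket per site under the default root,
    # with each host moved under its own DC
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush add-bucket dc2 datacenter
    ceph osd crush move dc1 root=default
    ceph osd crush move dc2 root=default
    ceph osd crush move host1 datacenter=dc1
    ceph osd crush move host2 datacenter=dc2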

You would also need to change your CRUSH rules to indicate that you want
both datacenter and host redundancy; that part of ours looks like this:

# rules
rule replicated_ruleset {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}
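
For completeness, this is roughly how a rule like that gets edited into the
CRUSH map and attached to a pool (the file names and the pool name "rbd"
are placeholders):

    # pull the CRUSH map, decompile, edit in the rule, recompile, push back
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # ... add/adjust the rule above in crushmap.txt ...
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    # sanity-check the placement and point the pool at the rule
    crushtool -i crushmap.new --test --rule 0 --num-rep 4 --show-mappings
    ceph osd pool set rbd crush_rule replicated_ruleset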

We run dedicated fiber between the 2 DCs, so with this config we can lose
an entire DC (or have a fiber cut) and our ceph cluster will still be able
to serve data to VMs on either side (in the case of a fiber cut), or we can
restart the VMs that were running in the downed DC on spare hardware in the
other location - so long as we don't also lose access to our tie-breaker
ceph monitor. If that happens.. I think we are screwed :/ The tie-breaker
uses the internet for its VPN connections, though, so we would also have to
lose internet access at the same time, and both sides have redundant
internet connections.
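
To spell out why the tie-breaker matters: with 3 monitors, quorum needs
floor(3/2) + 1 = 2 of them. Losing one DC takes out 1 monitor, leaving the
surviving DC's monitor plus the tie-breaker, so the cluster stays up;
losing the tie-breaker too drops us to 1 monitor and everything stops. From
the surviving side you can confirm quorum with the standard commands:

    # list the monitors and which ones are currently in quorum
    ceph mon stat

    # overall health: mon quorum, degraded/undersized PGs, OSDs down, etc.
    ceph -s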

If anyone sees a hole in my config here, please feel free to correct me :)
I'd hate to find out the hard way lol.

Cheers,
D.

-----Original Message-----
From: Nico Schottelius [mailto:nico.schottelius@xxxxxxxxxxx] 
Sent: Friday, March 25, 2022 12:58 PM
To: Wolfpaw - Dale Corse <dale@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject:  Re: Even number of replicas?


Hey Dale,

are you distributing your clusters over 4 DCs via dark fiber or can you
elaborate on your setup?

We are currently running 3/2, but each cluster is isolated in its own DC.

Best,

Nico

"Wolfpaw - Dale Corse" <dale@xxxxxxxxxxx> writes:

> Hi George,
>
>   We use 4/2 for our deployment and it works fine - but it's a huge 
> waste of space :)
>
> Our reason is that we want to be able to lose a data center and
> still have ceph running. You could accomplish that with size=1 on an 
> emergency basis, but we didn't like the redundancy loss.
>
> Cheers,
> D.
>
> -----Original Message-----
> From: Kyriazis, George [mailto:george.kyriazis@xxxxxxxxx]
> Sent: Friday, March 25, 2022 9:54 AM
> To: ceph-users <ceph-users@xxxxxxx>
> Subject:  Even number of replicas?
>
> Hello ceph-users,
>
> I was wondering if it is good practice to have an even number of 
> replicas in a replicated pool.  For example, have size=4 and min_size=2.
>
> Thank you!
>
> George
>


--
Sustainable and modern Infrastructures by ungleich.ch

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


