Re: Ceph remote disaster recovery at PB scale

Hi,

I don't mean to hijack this thread, I'm just curious about the multiple mirror daemons statement. Last year you mentioned that multiple daemons only make sense if you have different pools to mirror [1], at least that's how I read it. You wrote:

[...] but actually you can have multiple rbd-mirror daemons per cluster. It's the number of peers that are limited to one remote peer per pool. So technically if using different pools, you should be able to have three clusters connected as long as you have only one remote peer per pool. I never tested it though... [...] For further multi-peer support, I am currently working on adding support for it!

What's the current status on this? In the docs I only find a general statement that pre-Luminous you could only have one rbd-mirror daemon per cluster.
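
For reference, this is roughly how I read the one-remote-peer-per-pool setup, sketched in Python around the rbd CLI. The pool, peer and cluster names are made up, and I haven't tested the multi-cluster variant myself:

import subprocess

def rbd(*args):
    # Run an rbd CLI command; raises if it fails.
    return subprocess.run(["rbd", *args], check=True,
                          capture_output=True, text=True).stdout

# Hypothetical layout: pool-a replicates to cluster site-b, pool-b to
# site-c, so each pool still has only one remote peer.
for pool, peer in [("pool-a", "client.rbd-mirror-peer@site-b"),
                   ("pool-b", "client.rbd-mirror-peer@site-c")]:
    rbd("mirror", "pool", "enable", pool, "image")    # per-image mirroring mode
    rbd("mirror", "pool", "peer", "add", pool, peer)  # one remote peer per pool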

2) Should I use parallel rbd mirroring daemons to speed up the sync process? Or would a single daemon be sufficient?

Depends on the number of writes you have :). But well yes, rbd-mirror
daemons essentially talk between themselves and distribute the work.
Note that this is not a very smart work sharing: it tries to balance
the number of rbd images each daemon handles. So this essentially means
that you could technically have all your really busy images on one
daemon, for example.

This seems to contradict my previous understanding, so apparently you can have multiple daemons per cluster and they spread the load among themselves independently of the pools? I'd appreciate any clarification.

Thanks!
Eugen

[1] https://www.spinics.net/lists/ceph-users/msg68736.html

Quoting Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@xxxxxxx>:

Hi,

On 4/1/22 10:56, huxiaoyu@xxxxxxxxxxxx wrote:
1) Is rbd mirroring with petabytes of data doable or not? Are there any practical limits on the size of the total data?

So the first thing that matters with rbd replication is the amount of
data you write: if you have a PB that mostly doesn't change, your
replication would be mostly idle...

That being said, the limits are mostly the reads you can afford for
replication on the source cluster and the writes on your target
clusters. After that there is how much rbd-mirror can output, but
theoretically you can scale the number of rbd-mirror daemons you have
if this is a bottleneck.
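
As a very rough back-of-envelope check, in Python and with entirely
made-up numbers (not from any benchmark): replication only keeps up if
the sustained write rate fits inside those read/write budgets and the
aggregate rbd-mirror throughput.

# All numbers are assumptions, not measurements.
client_writes_mib_s = 800          # sustained writes to mirrored images
source_read_budget_mib_s = 2000    # reads you can spare on the source cluster
target_write_budget_mib_s = 1500   # writes you can spare on the target cluster
per_daemon_mib_s = 300             # what one rbd-mirror daemon can push

cluster_budget_ok = client_writes_mib_s <= min(source_read_budget_mib_s,
                                               target_write_budget_mib_s)
daemons_needed = -(-client_writes_mib_s // per_daemon_mib_s)  # ceiling division

print("cluster budgets ok:", cluster_budget_ok)       # True
print("rbd-mirror daemons needed:", daemons_needed)   # 3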

2) Should I use parallel rbd mirroring daemons to speed up the sync process? Or would a single daemon be sufficient?

Depends on the number of writes you have :). But well yes, rbd-mirror
daemons essentially talk between themselves and distribute the work.
Note that this is not a very smart work sharing: it tries to balance
the number of rbd images each daemon handles. So this essentially means
that you could technically have all your really busy images on one
daemon, for example.

Even if one would be sufficient for you, I would put at least two for
redundancy.
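
To make the image-count balancing point concrete, here is a toy Python
illustration (this is not the real rbd-mirror scheduler, just the idea,
with made-up per-image write rates): both daemons end up with three
images, but nearly all of the write load lands on one of them.

# Assumed per-image write rates in MiB/s; only the counts get balanced.
images = {
    "vm-busy-1": 500, "vm-idle-1": 1, "vm-busy-2": 450,
    "vm-idle-2": 2, "vm-idle-3": 1, "vm-idle-4": 3,
}
daemons = {"rbd-mirror.a": [], "rbd-mirror.b": []}
names = list(daemons)
for i, image in enumerate(images):                    # naive round-robin by count
    daemons[names[i % len(names)]].append(image)

for daemon, assigned in daemons.items():
    load = sum(images[name] for name in assigned)
    print(daemon, len(assigned), "images,", load, "MiB/s")
# -> rbd-mirror.a gets 3 images at ~951 MiB/s, rbd-mirror.b 3 images at ~6 MiB/s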

3) What could the lag time be at the remote site? At most 1 minute, or 10 minutes?

It depends on the mode. With journal-based mirroring, it's how many
entries in the journal you lag behind. With snapshot-based mirroring,
it essentially depends on your interval between snapshots and the time
it takes to write the diff to the target cluster.
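
So for snapshot-based mirroring the worst-case lag is roughly the
schedule interval plus the time to ship the diff. A small Python sketch
with made-up numbers, assuming a hypothetical 10-minute schedule set
with "rbd mirror snapshot schedule add":

# Worst-case lag ~ snapshot interval + time to ship the diff (made-up numbers).
snapshot_interval_s = 10 * 60   # e.g. a 10m mirror snapshot schedule
diff_size_gib = 50              # data written between two snapshots (assumed)
transfer_mib_s = 400            # usable bandwidth towards the target cluster (assumed)

diff_transfer_s = diff_size_gib * 1024 / transfer_mib_s
worst_case_lag_min = (snapshot_interval_s + diff_transfer_s) / 60
print(f"worst-case lag ~ {worst_case_lag_min:.1f} minutes")   # ~12.1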

In my setup, when I tested the journal mode I noticed significantly
slower replication than with the snapshot mode. I would encourage
you to read this set of slides that I presented last year at Ceph Month
June: https://codimd.web.cern.ch/p/-qWD2Y0S9#/. Feel free to test the
journal mode in your setup and report back to the list though, it could
be very interesting!

Cheers,

--
Arthur Outhenin-Chalandre



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


