Re: Ceph remote disaster recovery at PB scale

Hi,

On 4/1/22 10:56, huxiaoyu@xxxxxxxxxxxx wrote:
> 1) Rbd mirroring with Peta  bytes data is doable or not? are there any practical limits on the size of the total data? 

So the first thing that matters with rbd replication is the amount of
data you write: if you have a PB that mostly doesn't change, your
replication would be mostly idle...

That being said, the limits are mostly the reads that you can afford
for replication on the source cluster and the writes on your target
cluster. After that there is how much rbd-mirror itself can push, but
theoretically you can scale the number of rbd-mirror daemons you run if
that becomes a bottleneck.
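
To give a rough idea of what that looks like (pool name "mypool" and
the site names below are just placeholders, adjust for your setup),
one-way mirroring of a pool with the bootstrap peer workflow is
roughly:

    # on the source (primary) cluster
    rbd mirror pool enable mypool image
    rbd mirror pool peer bootstrap create --site-name site-a mypool > token

    # on the target (secondary) cluster
    rbd mirror pool enable mypool image
    rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only mypool token

The reads/writes I mention above are then whatever rbd-mirror has to
read on site-a and write on site-b to keep up with your change rate.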

> 2) Should i use parallel rbd mirroring daemons to speed up the sync process? Or a single daemon would be sufficient?

Depends on the number of writes you have :). But well yes, the
rbd-mirror daemons essentially talk between themselves and distribute
the work. Note that the work sharing is not very smart: it only tries
to balance the number of rbd images each daemon handles. So you could
technically end up with all your really busy images on one daemon, for
example.

Even if one would be sufficient for you, I would put at least two for
redundancy.
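
For example, if your clusters are managed by cephadm (an assumption on
my part, adjust if you deploy differently), scaling the daemons on the
target cluster is just a placement change:

    # run two rbd-mirror daemons for redundancy / more throughput
    ceph orch apply rbd-mirror --placement=2

The daemons then split the images between themselves as described
above.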

> 3) What could be the lagging time at the remote site? at most 1 minutes or 10 minutes?

It depends on the mode: with the journal mode it's how many entries in
the journal you lag behind. With snapshots, it essentially depends on
the interval between snapshots and the time it takes to write the diff
to the target cluster.
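
For instance with snapshot-based mirroring the schedule directly
bounds your lag (names here are again placeholders):

    # enable snapshot-based mirroring on one image
    rbd mirror image enable mypool/myimage snapshot

    # take a mirror snapshot every 10 minutes for the whole pool
    rbd mirror snapshot schedule add --pool mypool 10m

    # check how far behind the replica is
    rbd mirror image status mypool/myimage

With a 10m schedule the remote copy can lag up to roughly 10 minutes
plus however long the diff transfer takes.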

In my setup, when I tested the journal mode I noticed significantly
slower replication than with the snapshot mode. I would encourage
you to read this set of slides that I presented last year at Ceph Month
June: https://codimd.web.cern.ch/p/-qWD2Y0S9#/. Feel free to test the
journal mode in your setup and report back to the list though, it could
be very interesting!
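
If you do want to try the journal mode on a test image, it's roughly
(image-level mirroring and the names are placeholders again):

    # journal-based mirroring needs the journaling feature on the image
    rbd feature enable mypool/testimage journaling
    rbd mirror image enable mypool/testimage journal

Keep in mind that with journaling every write goes to the journal
before the image, so it also adds some write overhead on the primary
side.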

Cheers,

-- 
Arthur Outhenin-Chalandre
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


