Re: Ceph remote disaster recovery at PB scale


Hi Eugen,

On 4/6/22 09:47, Eugen Block wrote:
> I don't mean to hijack this thread, I'm just curious about the  
> multiple mirror daemons statement. Last year you mentioned that  
> multiple daemons only make sense if you have different pools to mirror  
> [1], at least that's how I read it, you wrote:
> 
>> [...] but actually you can have multiple rbd-mirror daemons per  
>> cluster. It's the number of peers that are limited to one remote  
>> peer per pool. So technically if using different pools, you should  
>> be able to have three clusters connected as long as you have only  
>> one remote peer per pool. I never tested it though...
>> [...] For further multi-peer support, I am currently working on  
>> adding support for it!
> 
> What's the current status on this? In the docs I find only a general  
> statement that pre-Luminous you only could have one rbd mirror daemon  
> per cluster.

Sorry if my message last year was confusing... I was talking about
adding multiple clusters as peers, i.e. running `rbd mirror pool
peer add [...]` (or similar) on the same pool and cluster multiple times.
That is still not possible in any stable release today (I am still
progressing on this in an upstream PR; there are a few remaining bugs,
but it mostly works).
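
To make that concrete, here is roughly what the peer configuration looks
like (the pool name is taken from the example further down, the client
name and the `site-b`/`site-c` cluster names are made up; the second
`peer add` on the same pool is exactly the part that does not work on
stable releases yet):

```
# Enable pool-mode mirroring and add one remote peer
rbd mirror pool enable barn-mirror pool
rbd mirror pool peer add barn-mirror client.rbd-mirror-peer@site-b

# Adding a second remote peer to the same pool is the multi-peer case
# discussed above; do not expect this to work on current stable releases
rbd mirror pool peer add barn-mirror client.rbd-mirror-peer@site-c
```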

But indeed, yes, you can launch multiple rbd-mirror daemons quite easily.
If you launch several rbd-mirror daemons on the same cluster, they will
elect a leader among themselves, and the leader will then try to keep the
number of images handled by each daemon roughly equal. There is also no
special trick needed to distribute the work across multiple pools: each
daemon should handle images from all the pools where RBD mirroring is
enabled.
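
For example, on a cephadm-managed cluster (not necessarily how these
daemons have to be deployed, and the hostnames below are placeholders),
something like this should do it:

```
# Ask the orchestrator to run three rbd-mirror daemons on three hosts;
# they then elect a leader among themselves automatically
ceph orch apply rbd-mirror --placement="3 host-a host-b host-c"
```

With other deployment tools the idea is the same: just start more than
one rbd-mirror daemon against the same cluster.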

You can see which daemon is the leader, among other details, with
`rbd mirror pool status --verbose`, for instance on one of our clusters:

```
$ rbd mirror pool status --verbose barn-mirror
[...]
DAEMONS
service 149710145:
  instance_id: 149710151
  client_id: barn-rbd-mirror-b
  hostname: barn-rbd-mirror-b.cern.ch
  version: 15.2.xx
  leader: false
  health: OK

service 149710160:
  instance_id: 149710166
  client_id: barn-rbd-mirror-c
  hostname: barn-rbd-mirror-c.cern.ch
  version: 15.2.xx
  leader: false
  health: OK

service 149781483:
  instance_id: 149710136
  client_id: barn-rbd-mirror-a
  hostname: barn-rbd-mirror-a.cern.ch
  version: 15.2.xx
  leader: true
  health: OK
[...]
```
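
Exactly one daemon should report `leader: true` at any given time; if
that one goes down, the remaining rbd-mirror daemons should elect a new
leader and redistribute the mirrored images among themselves.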

Cheers,

-- 
Arthur Outhenin-Chalandre


