Re: rbd-mirror questions

Thank you both for the detailed answers...this gives me a starting point to work from!

Shain

Sent from my iPhone

> On Aug 5, 2016, at 8:25 AM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> 
>> On Fri, Aug 5, 2016 at 3:42 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> 
>>> Op 4 augustus 2016 om 18:17 schreef Shain Miley <smiley@xxxxxxx>:
>>> 
>>> 
>>> Hello,
>>> 
>>> I am thinking about setting up a second Ceph cluster in the near future,
>>> and I was wondering about the current status of rbd-mirror.
>> 
>> I don't have all the answers, but I will give it a try.
>> 
>>> 1)is it production ready at this point?
>> 
>> Yes, but rbd-mirror is a single process at the moment, so mirroring a very large number of images might become a bottleneck at some point. I don't know exactly where that point is.
> 
> Production ready could mean different things to different people. We
> haven't had any reports of data corruption or similar issues. The
> forthcoming 10.2.3 release will include several rbd-mirror daemon
> stability and performance improvements (especially in terms of memory
> usage) that were uncovered during heavy stress testing beyond our
> normal automated test cases.
> 
> It is not currently HA nor horizontally scalable, but we have a design
> blueprint in place to start addressing this for the upcoming Kraken
> release. It is also missing a "deep scrub"-like utility to
> periodically verify that your replicated images match your primary
> images, which I am hoping to include in the Luminous release. Finally,
> we are still working through performance issues with the default
> journal settings, but in the meantime setting the
> "rbd_journal_object_flush_age" config option to a non-zero value (in
> seconds) will improve IOPS noticeably when journaling is enabled.
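> 
> As a rough sketch (the [client] section and the value of one second are
> just illustrative, not a recommendation), that would look something
> like this in the ceph.conf used by the clients doing the writes:
> 
>     [client]
>         rbd journal object flush age = 1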
> 
>>> 2)can it be used when you have a cluster with existing data in order to
>>> replicate onto a new cluster?
>> 
>> IIRC, images need the fast-diff feature enabled to be able to support mirroring; more on that in the docs: http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
>> 
>> The problem is, if you have old RBD images, maybe even format 1, you will not be able to mirror those.
>> 
>> Some RBD format 2 images can't be mirrored either, since they don't have journaling and fast-diff enabled.
>> 
>> So whether mirroring can run will depend on the individual image.
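>> 
>> For example (the pool and image names here are only placeholders), you
>> can check what an individual image supports with:
>> 
>>     rbd info rbd/myimage | grep -E 'format|features'
>> 
>> Format 1 images and format 2 images without the required features will
>> show up there.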
> 
> Yes, it will automatically "bootstrap" existing images to the new
> cluster by performing a full, deep copy of the images. The default
> setting is to synchronize a maximum of 5 images concurrently, but for
> huge images you may want to tweak that setting down. This requires
> only the exclusive-lock and journaling features on the images -- which
> can be dynamically enabled/disabled on existing v2 images if needed.
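> 
> If you need to turn those on for an existing format 2 image, something
> along these lines should do it (pool/image names are placeholders, and
> exclusive-lock has to be enabled before journaling):
> 
>     rbd feature enable rbd/myimage exclusive-lock
>     rbd feature enable rbd/myimage journaling
> 
> IIRC, the per-daemon sync concurrency mentioned above is controlled by
> the "rbd_mirror_concurrent_image_syncs" option if you want to lower it
> for huge images.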
> 
>>> 3)we have some rather large rbd images at this point..several in the
>>> 90TB range...would there be any concern using rbd-mirror given the size
>>> of our images?
>> 
>> The initial sync might be slow and block the single rbd-mirror process. Afterwards, if fast-diff is enabled, it shouldn't be a real problem.
> 
> Agreed -- the initial sync will take the longest. By default it copies
> up to 10 backing object blocks concurrently for each syncing image,
> but if your cluster has enough capacity you can adjust that up using
> the "rbd_concurrent_management_ops" config setting to increase the
> transfer throughput. While the initial sync is in-progress, the
> journal will continue to grow since the remote rbd-mirror process
> won't be able to replay events until after the sync is complete.
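> 
> As a sketch (the value of 20 is only illustrative; I believe this needs
> to go in the ceph.conf read by the rbd-mirror daemon on the remote
> cluster), that would look like:
> 
>     [client]
>         rbd concurrent management ops = 20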
> 
> As with any new feature or release of Ceph, I would recommend first
> playing around with it on non-production workloads. Since RBD
> mirroring is configured on a per-pool and per-image basis, the barrier
> to testing it is relatively low.
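> 
> For instance (pool and image names are placeholders), you could enable
> mirroring in image mode on a single test pool and pick individual
> images, leaving the rest of the cluster untouched while you evaluate:
> 
>     rbd mirror pool enable mytestpool image
>     rbd mirror image enable mytestpool/myimage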
> 
>> Wido
>> 
>>> Thanks,
>>> 
>>> Shain
>>> 
>>> --
>>> NPR | Shain Miley | Manager of Infrastructure, Digital Media |
>>> smiley@xxxxxxx | 202.513.3649
>>> 
> 
> 
> 
> -- 
> Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


