Re: RBD-Mirror Snapshot Scalability

On Thu, Jan 21, 2021 at 2:00 PM Adam Boyhan <adamb@xxxxxxxxxx> wrote:
>
> Looks like a script and cron will be a solid workaround.
>
> Still interested to know whether there are any options to make it so rbd-mirror can take more than one mirror snap per second.
>
>
>
> From: "adamb" <adamb@xxxxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxx>
> Sent: Thursday, January 21, 2021 11:18:36 AM
> Subject:  RBD-Mirror Snapshot Scalability
>
> I have noticed that RBD-Mirror snapshot mode can only manage to take one snapshot per second. For example, I have 21 images in a single pool. When the schedule is triggered, it takes the mirror snapshot of each image one at a time. It doesn't feel or look like a performance issue, as the OSDs are Micron 9300 PRO NVMe drives and each server has 2x Intel Xeon Platinum 8268 CPUs.

The creation of snapshot ids is limited by the MONs' quorum process.
It can issue multiple ids in a single batch, but they all need to be
queued up. The most recent version of the MGR's RBD mirror snapshot
scheduler works asynchronously, so it can start multiple snapshots
concurrently. It's much better, but it still won't scale to hundreds
of snapshots per second (not that your cluster could keep up even if
the MONs could).
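
In the meantime, the script-and-cron workaround you mentioned can
sidestep the serial per-image loop. Here is a rough, untested sketch
in Python (the pool name and worker count are placeholders, and it
assumes the rbd CLI is in your PATH):

#!/usr/bin/env python3
# Rough sketch: take mirror snapshots of every image in a pool
# concurrently instead of one at a time.
import subprocess
from concurrent.futures import ThreadPoolExecutor

POOL = "rbd"   # placeholder pool name, adjust for your cluster
WORKERS = 8    # snapshot requests kept in flight at once

def list_images(pool):
    # "rbd ls <pool>" prints one image name per line
    out = subprocess.run(["rbd", "ls", pool],
                         check=True, capture_output=True, text=True)
    return out.stdout.split()

def snap(image):
    # "rbd mirror image snapshot <pool>/<image>" creates one
    # mirror snapshot on demand
    subprocess.run(["rbd", "mirror", "image", "snapshot",
                    f"{POOL}/{image}"], check=True)
    return image

def main():
    with ThreadPoolExecutor(max_workers=WORKERS) as executor:
        for name in executor.map(snap, list_images(POOL)):
            print(f"created mirror snapshot for {POOL}/{name}")

if __name__ == "__main__":
    main()

Pointed at your pools from a */30 crontab entry, something like that
would keep your 30-minute cadence while letting several snapshot
requests queue up at the MONs in parallel.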

> I was hoping that adding more RBD-Mirror instances would help, but that only seems to help with overall throughput. As it sits, I have 3 RBD-Mirror instances running on each cluster.
>
> We run a 30-minute snapshot schedule to our remote site as it is; based on that, I can only squeeze in 1,800 mirror snaps every 30 minutes.

Honestly, you might be on the bleeding edge here, attempting to
replicate more than 1,800 images. Feedback from deployments like
yours helps us improve the software since, realistically, we don't
have the compute resources to test easily at that scale.

> I was hoping there might be something I am missing with RBD-Mirror as far as scaling goes.
>
> Maybe multiple pools would be a solution, and might have other benefits too?
>
>
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


