Re: RBD migration between 2 EC pools : very slow

Tue, 22 Jun 2021 at 23:22, Gilles Mocellin <gilles.mocellin@xxxxxxxxxxxxxx>:
>
> Hello Cephers,
>
>
> On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs on 8 TB HDDs), I'm
> migrating a 40 TB image from a 3+2 EC pool to an 8+2 one.
>
> The use case is Veeam backup on XFS filesystems, mounted via KRBD.
>
>
> Backups are running, and I can see 200 MB/s of throughput.
>
>
> But my migration (rbd migration prepare / execute) has been stuck at 4% for
> 6 hours now.
>
> When the backups are not running, I see only about 20 MB/s of throughput,
> presumably from the migration.
>
>
> It would take a month to migrate 40 TB at that speed!
>
>
> As I use a KRBD client, I cannot re-map the RBD image right after the rbd
> migration prepare, so the filesystem is not usable until the migration is
> complete.
>
> That's not really workable for me...
>
>
> Does anyone have an idea, either to speed up the RBD migration, or another
> method to move/copy an image between two pools with minimal downtime?
>
>
> I thought of rbd export-diff | rbd import-diff while the image is still
> mounted, then a final pass while it is unmapped before switching over
> (sketched below)...
>
> But that forces me to rename my image, because if I only change the data
> pool, the metadata pool stays the same.
>
>
> Can you see another method?
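
To answer the direct question first: the export-diff / import-diff flow you
describe could look roughly like this. It is only a sketch -- the pool names,
image names and size are placeholders, not your actual setup:

  # destination image, created once; it must live in the same metadata pool,
  # hence the new name, but can point at whichever data pool you choose
  rbd create --size 40T --data-pool newdatapool rbd/veeam-image-new

  # first pass, while the source image stays mapped and in use
  rbd snap create rbd/veeam-image@mig1
  rbd export-diff rbd/veeam-image@mig1 - | rbd import-diff - rbd/veeam-image-new

  # final pass, after unmapping the source (this is the downtime window)
  rbd snap create rbd/veeam-image@mig2
  rbd export-diff --from-snap mig1 rbd/veeam-image@mig2 - \
    | rbd import-diff - rbd/veeam-image-new

The final pass only transfers blocks changed since the first snapshot, so the
downtime is limited to that delta plus the unmap/remap, at the cost of the
rename you mention.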

That said, I suggest that you cancel the migration and not attempt it
again, because wide EC setups are very easy to overload with IOPS.

When I worked at croit GmbH, we had a very unhappy customer with
almost the same setup as the one you are trying to build: Veeam
Backup, XFS on RBD on an 8+3 EC pool of HDDs. Their complaint was
that both backups and restores were extremely slow (~3 MB/s, with
~200 ms of latency), although I would call their cluster simply
overloaded by too many concurrent backups. We tried, unsuccessfully,
to tune their setup; our final recommendation (successfully
benchmarked, but rejected due to cost) was to create a separate
replica 3 pool for new backups.

-- 
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



