Re: RBD migration between 2 EC pools: very slow

On 2021-06-22 20:21, Gilles Mocellin wrote:
Hello Cephers,


On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs with 8 TB HDDs), I'm migrating a 40 TB image from a 3+2 EC pool to an 8+2 one.

The use case is Veeam backup on XFS filesystems, mounted via KRBD.


Backups are running, and I can see 200 MB/s of throughput.


But my migration (rbd migration prepare / execute) has been stalled at 4% for 6 hours now.

When the backups are not running, I see only about 20 MB/s of throughput, presumably from the migration.
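
For reference, the sequence I ran was roughly the following (image and pool names here are placeholders; --data-pool puts the data objects on the EC pool while the image metadata stays in a replicated pool):

  rbd migration prepare --data-pool ec-8-2 rbd/veeam-img
  rbd migration execute rbd/veeam-img
  # only once execute has finished:
  rbd migration commit rbd/veeam-img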


At that speed, I would need a month to migrate 40 TB!

Hello,

It seems worse: this morning I still see the status at 4% completed.
The metrics I can see over the last 12 hours don't show much activity...
So it seems stalled rather than merely slow.
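
As far as I know, rbd status should report the migration state and progress for the image, with output something like (again, names are placeholders):

  rbd status rbd/veeam-img
  # ...
  # Migration:
  #   source: ...
  #   destination: ...
  #   state: executing (4% complete)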

Is anyone using the rbd migration commands on reasonably sized images? And, of course, between different data pools?

During my initial tests it worked, but only with small, mostly empty test images (1 TB)...

I will investigate an alternate method, like (sketched as commands below):
- snapshot the source image
- copy the source image to the new data pool (under a new image name)
- unmount the source image
- export-diff | import-diff the delta since the snapshot from the source image to the destination image
- test-mount the destination image
- delete the source image
- rename the destination image
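
As shell commands, roughly (image/pool names are placeholders, and I haven't tested this end-to-end yet):

  # 1. Snapshot the source and do the bulk copy; production keeps running.
  rbd snap create rbd/veeam-img@mig
  rbd cp --data-pool ec-8-2 rbd/veeam-img@mig rbd/veeam-img-new
  # Matching base snapshot on the destination, so import-diff has an anchor.
  rbd snap create rbd/veeam-img-new@mig

  # 2. Quiesce: stop the backups, unmount and unmap the source.
  umount /backups
  rbd unmap rbd/veeam-img

  # 3. Replay only what changed on the source since the snapshot.
  rbd export-diff --from-snap mig rbd/veeam-img - | rbd import-diff - rbd/veeam-img-new

  # 4. Test-mount the destination, then clean up and rename.
  rbd snap purge rbd/veeam-img
  rbd rm rbd/veeam-img
  rbd snap rm rbd/veeam-img-new@mig
  rbd rename rbd/veeam-img-new rbd/veeam-img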

The unknown duration here is the export/import of the diff. But during the initial copy, production can continue. Has anyone already done something similar who can comment on the duration or point out pitfalls?

--
Gilles



