RBD migration between 2 EC pools : very slow

Hello Cephers,


On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs with 8 TB HDDs), I'm
migrating a 40 TB image from a 3+2 EC pool to an 8+2 one.

The use case is Veeam backup on XFS filesystems, mounted via KRBD.


Backups are running, and I can see 200 MB/s of throughput.


But my migration (rbd migration prepare / execute) has been stalled at 4% for 6 hours now.
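
For clarity, the sequence is roughly the following (pool and image names below
are placeholders, not the real ones):

    # unmap the image, then prepare the migration to a new image whose
    # data pool is the 8+2 EC pool (the header stays in a replicated pool)
    rbd migration prepare --data-pool ec_8_2 rbd_meta/veeam01 rbd_meta/veeam01_new
    # copy the blocks; this is the step that is sitting at 4%
    rbd migration execute rbd_meta/veeam01_new
    # once execute finishes, drop the source stub
    rbd migration commit rbd_meta/veeam01_new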

When the backups are not running, I can see a mere 20 MB/s of throughput,
which is most likely the migration.
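
(Side note: the migration state and progress can also be checked with rbd
status; the image spec below is the same placeholder as above.)

    rbd status rbd_meta/veeam01_new
    # the output includes a "migration:" section with source, destination,
    # state and progress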


At that speed, I'd need a month to migrate 40 TB!


As I use a KRBD client, I cannot re-map the RBD image right after the
migration prepare (live migration requires a librbd client while it is in
progress). So the filesystem is not usable until the migration is completed.

Not really usable for me...


Does anyone have a clue how to speed up the RBD migration, or another method
to move/copy an image between two pools with minimal downtime?


I thought of doing rbd export-diff | rbd import-diff while the image is still
mapped, and one last diff while it is unmapped, just before switching over...

But that forces me to rename my image: the EC pool is only the data pool, so
the image headers of both copies live in the same metadata pool and cannot
share a name.
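
The rough sequence I have in mind (pool, image, and snapshot names are again
placeholders): an initial copy plus incremental diffs while the image is still
in use, then one final diff after unmounting, before mapping the new image.

    # create the destination image in the same metadata pool, different name,
    # with the 8+2 EC pool as data pool
    rbd create --size 40T --data-pool ec_8_2 rbd_meta/veeam01_new

    # initial copy while the source is still mapped
    rbd snap create rbd_meta/veeam01@mig1
    rbd export-diff rbd_meta/veeam01@mig1 - | rbd import-diff - rbd_meta/veeam01_new

    # later, after unmounting/unmapping the source: final incremental copy
    rbd snap create rbd_meta/veeam01@mig2
    rbd export-diff --from-snap mig1 rbd_meta/veeam01@mig2 - | rbd import-diff - rbd_meta/veeam01_new
    # then map rbd_meta/veeam01_new and mount it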


Do you see another method?

--

Gilles
