rbd live migration recovery

Hello,

I was playing around with rbd migration and I happened to interrupt the prepare step. That is, I hit Ctrl-C while rbd migration prepare was running.
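
For reference, the sequence I was going through was roughly the following (prepare is the step I interrupted, so execute and commit never ran; I'm reconstructing the exact image specs from memory, with VMpool as the source pool and VMpoolEC as the target):

rbd migration prepare VMpool/vm-206087-disk-1 VMpoolEC/vm-206087-disk-1
rbd migration execute VMpoolEC/vm-206087-disk-1
rbd migration commit VMpoolEC/vm-206087-disk-1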

Now I am left with a half-baked migration target (it has only half of the source snapshots) and a migration source that sits in the trash.

Aborting the migration results in an error:

root@pe2950-1:~# rbd migration abort VMpoolEC/vm-206087-disk-1
2022-07-09T20:19:47.474+0300 7f52b4cc1340 -1 librbd::Migration: open_images: failed retrieving migration header: (22) Invalid argument
Abort image migration: 0% complete...failed.
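
In case it helps, the target image itself still exists, and these are the commands I have been using to poke at it (if I understand correctly, rbd info on a properly prepared target would normally show a migration section, which is what the abort seems unable to read here):

rbd info VMpoolEC/vm-206087-disk-1
rbd status VMpoolEC/vm-206087-disk-1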

Restoring from the trash results in an error as well:

root@pe2950-1:~# rbd trash ls VMpool --all
892bd7086688e vm-206087-disk-1
root@pe2950-1:~# rbd trash restore VMpool/892bd7086688e
rbd: restore error: 2022-07-09T20:29:17.596+0300 7f301643e340 -1 librbd::api::Trash: restore: Current trash source 'migration' does not match expected: user,mirroring,unknown (4)
(22) Invalid argument
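
If there is no cleaner recovery path, I could also just drop both images entirely. I assume that would look something like the commands below, though I would not be surprised if the trash removal refuses for the same 'migration' source reason:

rbd rm VMpoolEC/vm-206087-disk-1
rbd trash rm VMpool/892bd7086688e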

I can't seem to find a way out of this situation in the docs. Is there something I can do? The cluster is for testing and the data can be discarded, but it would be good to know whether interrupting a step during rbd migration is a huge no-no.

Thanks in advance,

-Kostas
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
