Re: Ceph 18: Unable to delete image after incomplete migration "image being migrated"



Thanks Eugen.

root@hcn03:~# rbd status infra-pool/sophosbuild
2023-10-10T09:44:21.234+0000 7f1675c524c0 -1 librbd::Migration: open_images: failed to open destination image images/65d188c5f5a34: (2) No such file or directory
rbd: getting migration status failed: (2) No such file or directory
Watchers: none

I've checked the other pools again, but they only contain OpenStack images. There are only 42 images in total across all pools. In fact, the "infra-pool" pool only has 3 images, including the faulty one, so migrating/re-creating is not a big deal. It's more that I'd like to learn how to resolve such issues, if possible.

Good call on the history. I found this smoking gun with 'history | grep "rbd migration"':
rbd migration prepare infra-pool/sophosbuild images/sophosbuild
rbd migration execute images/sophosbuild

But images/sophosbuild is definitely not there anymore, and it's not in the trash either. It looks like I was missing the commit step.
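For reference, the full live-migration sequence has three steps, and the source image stays in the "being migrated" state until the last one. A sketch of what the complete run would have looked like, plus the abort path that is normally used to back out of an unfinished migration (whether abort still works here, with the destination image already deleted, is uncertain -- the error above suggests librbd needs to open it):

```shell
# Full live-migration sequence: the source is only removed at commit.
rbd migration prepare infra-pool/sophosbuild images/sophosbuild
rbd migration execute images/sophosbuild
rbd migration commit images/sophosbuild    # finalizes; deletes the source

# Backing out of an unfinished migration instead: abort reverts the
# source image and clears the migration state. With the destination
# gone this may fail the same way, but it is the documented escape hatch.
rbd migration abort infra-pool/sophosbuild
```

These commands need a live cluster, so treat them as a sketch of the workflow rather than a tested recipe for this exact failure.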

Kind regards,

------- Original Message -------

Eugen Block wrote:

Hi, there are a couple of things I would check before migrating all images. What's the current 'rbd status infra-pool/sophosbuild'? You probably don't have an infinite number of pools so I would also check if any of the other pools contains an image with the same name, just in case you wanted to keep its original name and only change the pool. Even if you don't have the terminal output, maybe you find some of the commands in the history?
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
