Re: Ceph 18: Unable to delete image after incomplete migration "image being migrated"

Thanks Eugen. Operation complete:

root@hcn03:/imagework# ceph osd pool delete infra-pool infra-pool --yes-i-really-really-mean-it
pool 'infra-pool' removed

Everything clean and tidy again.
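
For anyone who hits the same wall: the monitors refuse pool deletion
unless mon_allow_pool_delete is set, so if the command above had been
rejected, I believe the sequence would be roughly:

ceph config set mon mon_allow_pool_delete true
ceph osd pool delete infra-pool infra-pool --yes-i-really-really-mean-it
ceph config set mon mon_allow_pool_delete false

(with the flag switched back off afterwards, as in the last line).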

Thanks for your help and support. 

------- Original Message -------
On Wednesday, October 11th, 2023 at 7:21 PM, Eugen Block <eblock@xxxxxx> wrote:


> Hi,
> 
> then I misinterpreted your message and thought you were actually
> surprised about the trash image. Yeah I don't think messing with
> hexedit really helped here, but I'm not sure either. Anyway, let us
> know how it goes.
> 
> Quote from Rhys Goodwin rhys.goodwin@xxxxxxxxx:
> 
> > Thanks again Eugen. Looking at my command history, it does look like
> > I did execute the migration but didn't commit it. I wasn't surprised
> > to see it in the trash based on the doc you mentioned; I only tried
> > the restore as a desperate measure to clean up my mess. It doesn't
> > help that I messed around like this, including with hexedit :O. I
> > should have reached out before messing around.
> > 
> > I'll proceed with the migrate/re-create and report back. I'm just
> > crossing my fingers that I'll be allowed to delete the pool. It's a
> > lesson to me to take more care of my wee cluster.
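> > 
> > (Side note: I suppose 'rbd migration abort infra-pool/sophosbuild'
> > would be the documented way to roll back a prepared migration, but I
> > suspect it would fail here the same way, now that the destination
> > image is gone.)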
> > 
> > Cheers,
> > Rhys
> > 
> > ------- Original Message -------
> > On Wednesday, October 11th, 2023 at 7:54 AM, Eugen Block
> > eblock@xxxxxx wrote:
> > 
> > > Hi,
> > > 
> > > I just re-read the docs on rbd migration [1], haven't done that in a
> > > while, and it states the following:
> > > 
> > > > Note that the source image will be moved to the RBD trash to avoid
> > > > mistaken usage during the migration process
> > > 
> > > So it was expected that your source image was in the trash during the
> > > migration, no need to restore. According to your history you also ran
> > > the "execute" command, do you remember if ran successfully as well?
> > > Did you "execute" after the prepare command completed? But you also
> > > state that the target image isn't there anymore, so it's hard to tell
> > > what exactly happened here. I'm not sure how to continue from here,
> > > maybe migrating/re-creating is the only way now.
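> > > 
> > > For reference, the complete sequence described in [1] would be
> > > (with your pool/image names):
> > > 
> > > rbd migration prepare infra-pool/sophosbuild images/sophosbuild
> > > rbd migration execute images/sophosbuild
> > > rbd migration commit images/sophosbuild
> > > 
> > > The trashed source image is only removed for good once "commit"
> > > runs; "rbd migration abort" would revert the prepare instead.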
> > > 
> > > [1] https://docs.ceph.com/en/quincy/rbd/rbd-live-migration/
> > > 
> > > Quote from Rhys Goodwin rhys.goodwin@xxxxxxxxx:
> > > 
> > > > Thanks Eugen.
> > > > 
> > > > root@hcn03:~# rbd status infra-pool/sophosbuild
> > > > 2023-10-10T09:44:21.234+0000 7f1675c524c0 -1 librbd::Migration:
> > > > open_images: failed to open destination image images/65d188c5f5a34:
> > > > (2) No such file or directory
> > > > rbd: getting migration status failed: (2) No such file or directory
> > > > Watchers: none
> > > > 
> > > > I've checked over the other pools again, but they only contain
> > > > Openstack images. There are only 42 images in total across all
> > > > pools. In fact, the "infra-pool" pool only has 3 images, including
> > > > the faulty one. So migrating/re-creating is not a big deal. It's
> > > > more just that I'd like to learn more about how to resolve such
> > > > issues, if possible.
> > > > 
> > > > Good call on the history. I found this smoking gun with 'history
> > > > | grep "rbd migration"':
> > > > rbd migration prepare infra-pool/sophosbuild images/sophosbuild
> > > > rbd migration execute images/sophosbuild
> > > > 
> > > > But images/sophosbuild is definitely not there anymore, and not in
> > > > the trash. It looks like I was missing the commit.
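> > > > 
> > > > (For reference, I was checking along the lines of:
> > > > 
> > > > rbd ls images
> > > > rbd trash ls --all images
> > > > 
> > > > and neither lists sophosbuild.)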
> > > > 
> > > > Kind regards,
> > > > Rhys
> > > > 
> > > > ------- Original Message -------
> > > > 
> > > > Eugen Block Wrote:
> > > > 
> > > > Hi, there are a couple of things I would check before migrating all
> > > > images. What's the current 'rbd status infra-pool/sophosbuild'? You
> > > > probably don't have an infinite number of pools, so I would also
> > > > check if any of the other pools contains an image with the same
> > > > name, just in case you wanted to keep its original name and only
> > > > change the pool. Even if you don't have the terminal output, maybe
> > > > you find some of the commands in the history?
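> > > > 
> > > > A quick way to scan every pool would be something like this
> > > > (untested sketch):
> > > > 
> > > > for p in $(ceph osd pool ls); do
> > > >     rbd ls "$p" 2>/dev/null | grep -qx sophosbuild && echo "found in $p"
> > > > done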
> > > 
> 
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


