Re: Ceph Octopus rbd images stuck in trash

Hi,

just wondering if you're looking in the right pool(s)? The default pool is "rbd"; are the images you listed from the "rbd" pool? Do you use an alias for the "rbd" command? If that's not it, maybe increase the rbd client debug logs to see where it goes wrong. From time to time I also have to clean up orphans from the trash, but I believe I was able to restore the images from the trash before looking for watchers. Just in case you get that far: with 'rbd status <pool>/<image>' you should see whether there is a watcher, which you can then blacklist before cleaning up snapshots etc.
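
In case concrete commands help, something along these lines should work (pool and image names and the client address are placeholders, and the blacklist syntax is the Octopus-era one; newer releases call it "blocklist"):

# ceph osd pool ls
# rbd trash ls <pool>
# rbd status <pool>/<image>

And if 'rbd status' reports a stale watcher, e.g. 192.168.0.5:0/123456:

# ceph osd blacklist add 192.168.0.5:0/123456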

Regards,
Eugen

Quoting Jeff Welling <real.jeff.welling@xxxxxxxxx>:

Hello there,

I'm running Ceph 15.2.17 (Octopus) on Debian Buster. I'm starting an upgrade, but I've hit a problem and wanted to ask how best to proceed, in case I make things worse by mucking with it without asking the experts first.

I moved an rbd image to the trash without clearing its snapshots first, and then tried 'rbd trash purge'. This resulted in an error because the image still has snapshots, but I'm also unable to get the images back out of the trash to clear the snapshots. At least one of these images is a clone of a snapshot of another trashed image, which I'm already kicking myself for.
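
For context, my understanding of the clean removal order would have been something like the following, which is not what I did (names are placeholders; the flatten and unprotect steps only apply where protected snapshots and clones exist):

# rbd flatten <pool>/<clone>
# rbd snap unprotect <pool>/<image>@<snap>
# rbd snap purge <pool>/<image>
# rbd trash mv <pool>/<image>
# rbd trash purge <pool>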

The contents of my trash:

# rbd trash ls
07afadac0ed69c nfsroot_pi08
240ae5a5eb3214 bigdisk
7fd5138848231e nfsroot_pi01
f33e1f5bad0952 bigdisk2
fcdeb1f96a6124 raspios-64bit-lite-manuallysetup-p1
fcdebd2237697a raspios-64bit-lite-manuallysetup-p2
fd51418d5c43da nfsroot_pi02
fd514a6b4d3441 nfsroot_pi03
fd515061816c70 nfsroot_pi04
fd51566859250b nfsroot_pi05
fd5162c5885d9c nfsroot_pi07
fd5171c27c36c2 nfsroot_pi09
fd51743cb8813c nfsroot_pi10
fd517ad3bc3c9d nfsroot_pi11
fd5183bfb1e588 nfsroot_pi12


This is the error I get trying to purge the trash:

# rbd trash purge
Removing images: 0% complete...failed.
rbd: some expired images could not be removed
Ensure that they are closed/unmapped, do not have snapshots (including trashed snapshots with linked clones), are not in a group and were moved to the trash successfully.
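
In case it's relevant, my understanding is that snapshots of a trashed image can still be listed by its trash ID, and clones of a non-trashed parent by its snapshot, though I'm not certain about the exact form (pool/image/snap names are placeholders; the ID is from my listing above):

# rbd snap ls --pool <pool> --image-id fcdeb1f96a6124
# rbd children <pool>/<image>@<snap>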


This is the error when I try to restore one of the trashed images:

# rbd trash restore nfsroot_pi08
rbd: error: image does not exist in trash
2023-01-11T12:28:52.982-0800 7f4b69a7c3c0 -1 librbd::api::Trash: restore: error getting image id nfsroot_pi08 info from trash: (2) No such file or directory

Trying to restore other images gives the same error.
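
One thing I notice in the output of 'rbd help trash restore' is that it takes the image ID shown by 'rbd trash ls' rather than the image name, so perhaps the ID form is worth trying (ID taken from my listing above):

# rbd trash restore 07afadac0ed69c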

These trashed images are now taking up a significant portion of the cluster's space. One thought was to upgrade and see if that resolves the problem, but I've shot myself in the foot in the past by upgrading without confirming it would actually fix the issue, so I'm looking for a second opinion on how best to clear these.

These are all Debian Buster systems, the kernel version of the host I'm running these commands on is:

Linux zim 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27) x86_64 GNU/Linux

I'm going to be upgrading that too, but one step at a time.
The exact Ceph version is:

ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)

This was installed from the Ceph repos, not the Debian repos, using cephadm. If there are any additional details I can share, please let me know; any and all thoughts are welcome! I've been googling and have found folks with similar issues, but nothing close enough to be helpful.

Thanks in advance, and thank you to everyone who contributes to Ceph; it's awesome!


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


