Re: Orphaned rbd_data Objects

Hi Frederic,

thanks for your suggestions. I took a look at all the objects and discovered the following:

1. rbd_id.<image_name>, rbd_header.<image_id> and rbd_object_map.<image_id> exist only for the 3 images listed by 'rbd ls' (I created a test image yesterday).

2. There are about 30 different image_ids with rbd_data.<image_id> objects that have no corresponding rbd_id, rbd_header or rbd_object_map object.

After that, I tried to stat one of the orphaned objects:

rados -p mailbox stat rbd_data.26f7c5d05af621.0000000000002adf
 error stat-ing mailbox/rbd_data.26f7c5d05af621.0000000000002adf: (2) No such file or directory
 
I double-checked that the object name is exactly the one listed by 'rados ls'. What makes it worse is that I can neither stat, get nor rm the objects, yet they are still counted towards disk usage. We will remove the whole pool for sure, but I would really like to find the cause of this to prevent it from happening again.
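
Two more checks I still plan to run, roughly sketched below: whether the object
only survives as a snapshot clone (which would explain why a stat on the head
object fails) and whether anything with that prefix lives in a different RADOS
object namespace:

rados -p mailbox listsnaps rbd_data.26f7c5d05af621.0000000000002adf   # shows clones kept alive by snapshots, if any
rados -p mailbox ls --all | grep 26f7c5d05af621                       # --all lists objects in every RADOS namespace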

On 30.01.2025 at 10:54, Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote:

Hi Felix,

Every rbd_data object belongs to an image that should have:

- an rbd_id.<image_name> object containing the image id (which you can also get with rbd info)
- an rbd_header.<image_id> object with omap attributes you can list with listomapvals

To identify the image names these rbd_data objects belong(ed) to, you could list all rbd_id objects in that pool and, for each of them, print the image id and the image name with the command below:

$ for rbd_id in $(rados -p $poolname ls | grep rbd_id) ; do echo "$(echo $rbd_id | cut -d '.' -f2) : $(rados -p $poolname get $rbd_id - | strings)" ; done
image2 : 2a733b30debc84
image1 : 28d4fc1dddd922

It might take some time, but you'd get a clearer view of what these rbd_data objects refer(red) to. Also, if you can decode the timestamps in 'rados -p rbd listomapvals rbd_header.<id> | strings' output, you could tell when each image was created and last accessed.
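
For example, assuming the create_timestamp key in rbd_header.<id> holds a binary utime_t (little-endian 32-bit seconds followed by nanoseconds), a rough sketch to print an image's creation date could look like this (with $image_id being one of the ids printed by the loop above):

$ rados -p $poolname getomapval rbd_header.$image_id create_timestamp /tmp/ts.bin
$ # swap the first 4 little-endian bytes to big-endian hex and feed them to date as epoch seconds
$ date -d @$((16#$(xxd -p -l4 /tmp/ts.bin | sed 's/\(..\)\(..\)\(..\)\(..\)/\4\3\2\1/')))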

Hope that helps.

Regards,
Frédéric.

PS: If you're moving away from iSCSI and only have 2 remaining images in this pool, you may also wait until these images are no longer in use and then detach them and remove the whole pool.

----- On 30 Jan 25, at 9:09, Felix Stolte <f.stolte@xxxxxxxxxxxxx> wrote:
Hi Frederic,
there is no namespace. The pool in question has the application rbd, but it is not the default pool named 'rbd'.

On 29.01.2025 at 11:24, Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote:

Hi Felix,

Any RADOS namespaces in that pool? You can check using either:

rbd namespace ls rbd

or

rados stat -p rbd rbd_namespace && rados -p rbd listomapvals rbd_namespace

The rbd_data objects might be linked to namespaced images that can only be listed using the command: rbd ls --namespace <namespace>
I suggest checking this because the 'rbd' pool has historically been Ceph's default RBD pool, long before iSCSI began using it (in its hardcoded implementation).
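
If namespaces do show up, a rough way to list every image across all of them (assuming the plain output of 'rbd namespace ls' is simply one namespace name per line; adjust if your release prints a table header) would be:

rbd ls rbd
for ns in $(rbd namespace ls rbd); do echo "namespace ${ns}:"; rbd ls rbd --namespace "$ns"; done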

Might be worth checking this before taking any actions.

Regards,
Frédéric.

----- On 29 Jan 25, at 8:53, Felix Stolte f.stolte@xxxxxxxxxxxxx wrote:

Hi Alexander,

the trash is empty and rbd ls only lists two images, with the prefixes
rbd_data.1af561611d24cf and rbd_data.ed93e6548ca56b

rados ls gives:

rbd_data.d1b81247165450.00000000000055d2
rbd_data.32de606b8b4567.0000000000012f2f
rbd_data.ed93e6548ca56b.00000000000eef03
rbd_data.26f7c5d05af621.0000000000002adf
….



On 28.01.2025 at 22:46, Alexander Patrakov <patrakov@xxxxxxxxx> wrote:

Hi Felix,

A dumb answer first: if you know the image names, have you tried
"rbd rm $pool/$imagename"? Or is there any reason, such as concerns
about iSCSI control data integrity, that prevents you from trying that?

Also, have you checked the rbd trash?
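
For reference, checking it could be as simple as the sketch below (assuming
your release supports the --long flag):

rbd trash ls $pool           # one line per trashed image: id and original name
rbd trash ls --long $pool    # adds source and deferment details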

On Tue, Jan 28, 2025 at 5:43 PM Stolte, Felix <f.stolte@xxxxxxxxxxxxx> wrote:

Hi guys,

we have an RBD pool we used for images exported via ceph-iscsi on a 17.2.7
cluster. The pool uses 10 times the disk space I would expect it to, and
after investigating we noticed a lot of rbd_data objects whose images are no
longer present. I assume that the original images were deleted using gwcli
but not all objects were removed properly.

What would be the best/most secure way to get rid of these orphaned objects
and reclaim the disk space?
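
The only approach I can think of so far, sketched below, is to collect the
objects per orphaned image-id prefix, review the list manually, and only then
delete them one by one with rados; I am not sure whether that is safe while
ceph-iscsi still uses the pool:

prefix=26f7c5d05af621                                  # one of the ids seen in 'rados ls' output
rados -p $poolname ls | grep "^rbd_data\.${prefix}\." > /tmp/orphaned_${prefix}.txt
wc -l /tmp/orphaned_${prefix}.txt                      # review the list before deleting anything
while read -r obj; do rados -p $poolname rm "$obj"; done < /tmp/orphaned_${prefix}.txt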


Best regards
Felix





--
Alexander Patrakov





Kind regards
Felix Stolte

IT-Services



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
