Hello,
I have been carrying large OMAP objects for about a year now.
These objects probably belong to an old bucket that has since been removed,
so I cannot use bilog trim, and a deep scrub does nothing.
Also, even though my cluster is not huge (my object storage pool is only
around 10 TB), rgw-orphan-list takes too long to run.
So, since I only have 6 large OMAP objects (just above 200,000 keys), I
would like to find and remove the orphaned RADOS objects by hand.
Could someone tell me whether my assumptions below are right?
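For reference, this is how I find the affected index objects in the first
place; the log path is an assumption for a default installation and may
differ on your monitor hosts:

ceph health detail | grep -i 'large omap'
grep 'Large omap object found' /var/log/ceph/ceph.log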
Let's take this log entry:
2023-06-09T00:51:10.222449+0200 osd.66 (osd.66) 12 : cluster [WRN] Large
omap object found. Object:
3:56f3a469:::.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.9:head
PG: 3.9625cf6a (3.2a) Key count: 200304 Size (bytes): 58606170
From this I deduce that the bucket ID is:
aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2
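My understanding is that index objects are named .dir.<bucket-instance-id>.<shard>,
so stripping the ".dir." prefix and the trailing shard number should give the
instance ID. A quick sed sketch, assuming that naming convention holds:

echo ".dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.9" \
  | sed -E 's/^\.dir\.//; s/\.[0-9]+$//'
# -> aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2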
I cannot find a bucket name associated with that ID in the metadata list:
# radosgw-admin metadata list --metadata-key bucket.instance | grep -i
aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2
returns nothing.
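To double-check, my plan is to walk every remaining bucket and look for that
ID in its stats. A rough sketch, assuming jq is available and that
"radosgw-admin metadata list bucket" returns a JSON array of bucket names:

for b in $(radosgw-admin metadata list bucket | jq -r '.[]'); do
  radosgw-admin bucket stats --bucket="$b" \
    | grep -q 'aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2' \
    && echo "still referenced by bucket: $b"
done

If that prints nothing, I assume no live bucket points at that instance ID.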
If I list all index objects matching that bucket ID, I find:
gmo_admin@fidcl-mrs4-sto-sds-01:~$ sudo rados -p MRS4.rgw.buckets.index
ls | grep aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2 | cat
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.25
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.0
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.15
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.17
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.9
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.26
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.7
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.23
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.2
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.3
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.16
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.13
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.22
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.21
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.4
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.20
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.19
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.11
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.27
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.24
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.28
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.10
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.14
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.18
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.1
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.6
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.8
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.12
.dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.5
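Before touching anything I would also count the omap keys per shard,
something like:

rados -p MRS4.rgw.buckets.index ls \
  | grep aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2 \
  | while read obj; do
      echo "$obj: $(rados -p MRS4.rgw.buckets.index listomapkeys "$obj" | wc -l) keys"
    done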
Do you think it is safe to delete them, given that they all belong to a
non-existent bucket?
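If they really are orphaned, my plan would simply be to remove the shard
objects one by one (and then deep-scrub the affected PGs so the warnings
clear). Is this the right approach?

# only if the bucket is confirmed gone -- this is irreversible
rados -p MRS4.rgw.buckets.index ls \
  | grep aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2 \
  | while read obj; do
      rados -p MRS4.rgw.buckets.index rm "$obj"
    done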
How can I be sure that an RGW index object, and the omap keys it holds, are
no longer used by anything?
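My only idea so far is to sample a few omap keys from one shard and check
whether a matching head object still exists in the data pool. This assumes
the usual <marker>_<key> head-object naming, that the marker equals the
bucket instance ID (a reshard could make them differ), and that my data pool
is MRS4.rgw.buckets.data:

marker=aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2
rados -p MRS4.rgw.buckets.index listomapkeys \
  .dir.aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2.9 \
  | head -20 \
  | while read key; do
      rados -p MRS4.rgw.buckets.data stat "${marker}_${key}" 2>/dev/null \
        && echo "data object still present for: $key"
    done

If every sampled key comes back with no data object, that would reinforce my
assumption that the index shards are orphaned.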