Re: index object in shard begins with hex 80

Ok,
I think I figured this out. First, as I think I wrote earlier, these objects in the ugly namespace begin with "<80>0_0000", and as such are "bucket log index" entries according to the bucket_index_prefixes[] table in cls_rgw.cc.
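(For anyone searching later: a quick way to count these entries on a shard is something like the following; the pool and .dir object names are placeholders for your own, and the $'...' raw-byte syntax is bash.)

# rados -p default.rgw.buckets.index listomapkeys .dir.zone.bucketid.xx.0 | grep -ac $'^\x800_'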
These entries were multiplying, and caused the 'Large omap object' warnings. Our users were creating *a lot* of small objects.

We have a multi-site environment, with replication between the two sites for all buckets. Recently, we had some inadvertent downtime on the slave zone side. Checking the bucket in question, the large omap warning showed up ONLY on the slave side. It turns out the bucket has expiration set on all objects after a few days, and since the date of the downtime, NO objects had been deleted on the slave side! Deleting the 'extra' objects on the slave side by hand, and then running 'bucket sync init' on the bucket on both sides, seems to have resolved the situation. But this may be a bug in data sync when the slave side is unavailable for a time.
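For reference, the commands involved were along these lines ('bucket sync status' is just a handy check; the bucket name is a placeholder):

# radosgw-admin bucket sync status --bucket=mybucket    (on the slave zone, to see how far behind it is)
# radosgw-admin bucket sync init --bucket=mybucket      (on both sides, to re-initialize sync for the bucket)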

-Chris

    On Tuesday, July 18, 2023 at 12:14:18 PM MDT, Dan van der Ster <dan.vanderster@xxxxxxxxx> wrote:  
 
 Hi Chris,
Those objects are in the so-called "ugly namespace" of RGW, used to prefix special bucket index entries.

// No UTF-8 character can begin with 0x80, so this is a safe indicator
// of a special bucket-index entry for the first byte. Note: although
// it has no impact, the 2nd, 3rd, or 4th byte of a UTF-8 character
// may be 0x80.
#define BI_PREFIX_CHAR 0x80
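Since you can't type the raw 0x80 byte on the command line, one way to get at a single key is to write it to a file first, e.g. (pool/object names are placeholders, and I believe getomapval accepts --omap-key-file just like rmomapkey does):

# printf '\x800_00004771163.3444695458.6' > mykey
# rados -p pool.index getomapval .dir.zone.bucketid.xx.indexshardnumber --omap-key-file mykey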

You can use --omap-key-file and some sed magic to interact with those keys, e.g. like this example from my archives [1]. (In my example I needed to remove orphaned olh entries -- in your case you can generate uglykeys.txt in whichever way is meaningful for your situation.)

BTW, to be clear, I'm not suggesting you blindly delete those keys. You would need to confirm that they are not needed by a current bucket instance before deleting, lest some index get corrupted.
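E.g. something like this (bucket name is a placeholder) shows the current bucket instance id, which you can compare against the id in the .dir.* object names you are about to touch:

# radosgw-admin bucket stats --bucket=mybucket | grep '"id"'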

Cheers, Dan

______________________________________________________
Clyso GmbH | Ceph Support and Consulting | https://www.clyso.com
[1] 
# radosgw-admin bi list --bucket=xxx --shard-id=0 > xxx.bilist.0
# cat xxx.bilist.0 | jq -r '.[]|select(.type=="olh" and .entry.key.name=="") | .idx' > uglykeys.txt
# head -n2 uglykeys.txt
<80>1001_00/2a/002a985cc73a01ce738da460b990e9b2fa849eb4411efb0a4598876c2859d444/2018_12_11/2893439/3390300/metadata.gz
<80>1001_02/5f/025f8e0fc8234530d6ae7302adf682509f0f7fb68666391122e16d00bd7107e3/2018_11_14/2625203/3034777/metadata.gz

# cat do_remove.sh

# usage: "bash do_remove.sh | sh -x"
# For each key in uglykeys.txt, print a command that rewrites the mangled
# first byte back to a raw 0x80 (via echo -e), writes the raw key to a
# file, and then removes that omap key from the index shard object.
while read -r f;
do
    echo -n "$f" | sed 's/^.1001_/echo -n -e \\\\x801001_/'; echo ' > mykey && rados rmomapkey -p default.rgw.buckets.index .dir.zone.bucketid.xx.indexshardnumber --omap-key-file mykey';
done < uglykeys.txt
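After running it, you can re-count the keys on the shard to confirm the cleanup, e.g.:

# rados -p default.rgw.buckets.index listomapkeys .dir.zone.bucketid.xx.indexshardnumber | wc -l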




On Tue, Jul 18, 2023 at 9:27 AM Christopher Durham <caduceus42@xxxxxxx> wrote:

Hi,
I am using Ceph 17.2.6 on Rocky Linux 8.
I got a large omap object warning today, and tracked it down to a shard for a bucket in the index pool of an S3 zone.

However, when listing the omap keys with:
# rados -p pool.index listomapkeys .dir.zone.bucketid.xx.indexshardnumber
it is clear that the problem is caused by many omap keys with the following name format:

<80>0_00004771163.3444695458.6
A hex dump of the output of the listomapkeys command above indicates that the first 'character' is indeed hex 80. But as there is no ASCII equivalent for hex 80, I am not sure how to 'get at' those keys to see their values, delete them, etc. The index keys not of the format above appear to be fine, showing S3 object names as expected.
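For example, something like this is how the raw bytes can be seen:

# rados -p pool.index listomapkeys .dir.zone.bucketid.xx.indexshardnumber | head -1 | hexdump -C | head -2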

The rest of the index shards for the bucket are reasonable, each with fewer than osd_deep_scrub_large_omap_object_key_threshold keys, and the overall total of objects in the bucket is well below osd_deep_scrub_large_omap_object_key_threshold * num_shards.
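For reference, the threshold can be checked with something like:

# ceph config get osd osd_deep_scrub_large_omap_object_key_threshold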

These weird objects seem to be created occasionally. Yes, the bucket is used heavily.

Any advice here?
-Chris




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



