Re: Multisite RGW - Large omap objects related with bilogs

Dear All


We have the same question here, so if anyone can help, thank you!


Cheers

Francois




From: ceph-users on behalf of P. O. <posdub@xxxxxxxxx>
Sent: Friday, August 9, 2019 11:05 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: [ceph-users] Multisite RGW - Large omap objects related with bilogs
 
Hi all,

I have two Ceph clusters in an RGW multisite environment, with ~1500 buckets (500M objects, 70 TB).
Some of the buckets are very dynamic (objects are constantly changing).

I have problems with large omap objects in the bucket indexes, related to these "dynamic" buckets.

For example:
[root@rgw ~]# radosgw-admin bucket stats --bucket bucket_s3d33 |grep num_objects
"num_objects": 564

In /var/log/ceph/ceph.log:
cluster [WRN] Large omap object found. Object: 10:297646ca:::.dir.86a05ec8-9982-429b-9f94-28363610a95c.12546d0.17892:head Key count: 5307523 Size (bytes): 748792509

I found that this is because of bucket index logs:

[root@rgw-1 ~]# rados -p default.rgw.buckets.index listomapkeys .dir.86a05ec8-9982-429b-9f94-28363610a95c.12546d0.17892 | wc -l
5307523

There are a lot of keys like these (each one starts with a non-printable prefix byte, shown here as \x80):
\x800_00013758656.71188336.4
\x800_00013758657.71188337.5
\x800_00013758658.71188338.4
\x800_00013758659.71188339.5
\x800_00013758660.71188342.4
\x800_00013758661.71188343.5
\x800_00013758662.71188344.4

[root@rgw-1 ~]# radosgw-admin bilog list --bucket bucket_s3d33 --max-entries 6000000 |grep op_id | wc -l
5307523
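
To see whether these log keys sit in a single index object or are spread over shards, I loop over all index objects of this bucket instance (pool name and instance ID are the ones from the log warning above; this is just my quick check, nothing official):

POOL=default.rgw.buckets.index
INSTANCE=86a05ec8-9982-429b-9f94-28363610a95c.12546d0.17892
for obj in $(rados -p "$POOL" ls | grep "$INSTANCE"); do
    # count omap keys per index object (shard)
    echo "$obj: $(rados -p "$POOL" listomapkeys "$obj" | wc -l)"
done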


I have configured the following parameters in my ceph.conf:
rgw sync log trim concurrent buckets = 32
rgw sync log trim max buckets = 64
rgw sync log trim interval = 1200
rgw sync log trim min cold buckets = 4
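
To make sure the running radosgw actually picked these values up (and that I am not just editing ceph.conf for nothing), I check over the admin socket; the daemon name client.rgw.rgw-1 is just what it is called on my host:

# show the values the running daemon is using
ceph daemon client.rgw.rgw-1 config show | grep rgw_sync_log_trim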

But for two weeks now, the omap key count has kept growing.

How can I safely clean up these bilogs, without damaging the bucket or breaking replication?
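
For context, what I am doing so far before touching anything is checking that both zones report this bucket as caught up (I assume this is the right thing to look at, but I am not sure it is enough):

# overall multisite sync state of this zone
radosgw-admin sync status
# per-bucket view: shows whether this zone is behind its source on this bucket
radosgw-admin bucket sync status --bucket bucket_s3d33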


I found two radosgw-admin commands related to bilog trimming:

1) radosgw-admin bilog trim --bucket=bucket_s3d33 --start-marker XXXX --end-marker YYYY
I don't know what values to pass for --start-marker and --end-marker (see the marker sketch after this list).
Is it safe to run "bilog trim" on a bucket with replication still in progress? If yes, should I run it on both sites?

2) radosgw-admin bilog autotrim
Is this command safe? Can I run autotrim on a selected bucket only?
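
For 1), my rough idea (only a sketch, I have not dared to run the trim itself yet) is to take the op_id of the oldest and newest entries from "bilog list" as the markers; jq is only used to parse the JSON output:

BUCKET=bucket_s3d33
# oldest entry's op_id as a candidate start marker
START=$(radosgw-admin bilog list --bucket "$BUCKET" --max-entries 1 | jq -r '.[0].op_id')
# newest entry's op_id as a candidate end marker
END=$(radosgw-admin bilog list --bucket "$BUCKET" --max-entries 6000000 | jq -r '.[-1].op_id')
echo "would trim from $START to $END"
# radosgw-admin bilog trim --bucket "$BUCKET" --start-marker "$START" --end-marker "$END"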

Or maybe there is some other way to delete these bilogs?


Best regards,
P.O.