Re: OSD META Capacity issue of rgw ceph cluster

You may be facing this "BlueFS files take too much space" bug [1]. Have a look at the figures in the PR [2]. 

The fix hasn't been merged into Reef yet.
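
To confirm that BlueFS files are what's eating the space, you can dump BlueFS usage from the OSD's admin socket on its host. Something like this should show it (osd.X is a placeholder, and the 'bluefs stats' command may not be available on every release):

$ ceph daemon osd.X bluefs stats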

Regards, 
Frédéric. 

[1] https://tracker.ceph.com/issues/68385
[2] https://github.com/ceph/ceph/pull/60158

----- On Nov 8, 2024, at 5:51, Jaemin Joo <jm7.joo@xxxxxxxxx> wrote:

> Thank you for your response.
> I will test it and report back with the result.

> What about the bluestore_rocksdb_options?
> I don't understand why RocksDB permanently keeps so much log data in its SST
> files, beyond the file metadata.
> I would expect some of the log data to be rotated so that only recent entries
> are kept.
> I found RocksDB options related to log files, such as log_file_time_to_roll
> and keep_log_file_num.
> Is it possible to change the RocksDB options to reduce the META usage?

> On Fri, Nov 8, 2024 at 12:53 AM, Frédéric Nass
> <frederic.nass@xxxxxxxxxxxxxxxx> wrote:

>> Hi,

>> You could give rocksdb compression a try. It's safe to use since Pacific and
>> it's now enabled by default in Squid:

>> $ ceph config set osd bluestore_rocksdb_options_annex
>> 'compression=kLZ4Compression'
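
>> If you want to double-check that the option was applied, you can read it back
>> (assuming the standard 'ceph config get' interface here):

>> $ ceph config get osd bluestore_rocksdb_options_annex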

>> Restart all OSDs and compact them twice. You can check db_used_bytes before and
>> after enabling compression with:

>> $ ceph tell osd.x perf dump bluefs | grep -E "db_"
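
>> For the compaction itself, a loop over all OSDs using the 'compact' admin
>> command should do it. This is just a sketch, adjust it to your own tooling:

>> $ for id in $(ceph osd ls); do ceph tell osd.$id compact; done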

>> Regards,
>> Frédéric.

>> ----- On Nov 7, 2024, at 16:07, Jaemin Joo <jm7.joo@xxxxxxxxx> wrote:

>> > Hi All,

>> > I have a RADOS Gateway (RGW) Ceph cluster (v18.2.1) that uses erasure
>> > coding and 10009 shards to store a large number of objects in a bucket:
>> > root:/home/ceph# ceph osd df
>> > ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA    OMAP   META     AVAIL    %USE  VAR   PGS  STATUS
>> >  0    hdd  0.00490   1.00000  5.0 GiB  318 MiB  58 MiB  1 KiB  260 MiB  4.7 GiB  6.21  0.94  143      up
>> >  1    hdd  0.00490   1.00000  5.0 GiB  330 MiB  56 MiB  2 KiB  274 MiB  4.7 GiB  6.45  0.98  155      up
>> >  2    hdd  0.00490   1.00000  5.0 GiB  372 MiB  60 MiB  1 KiB  313 MiB  4.6 GiB  7.28  1.10  148      up

>> > (These OSDs back the data pool only, not the RGW index pool, so I don't
>> > think the large number of shards affects the data pool's META size.)
>> > The OSD META usage is about as large as the data itself. I know that OSD
>> > META is RocksDB, but it looks oversized even allowing for the metadata
>> > RocksDB needs.
>> > I can run an OSD compaction to shrink the META size, but I want to know why
>> > the META size is bigger than I expect.
>> > Is there a way to reduce the META size by changing a parameter instead of
>> > running osd compact?
>> > I guess that some of the "bluestore_rocksdb_options" could affect the META size.

>> > And let me know if you need any other cluster parameters to figure out the
>> > symptom.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



