OSD META capacity issue on an RGW Ceph cluster

Hi All,

I have a RADOS Gateway (RGW) Ceph cluster (v18.2.1) that uses erasure coding, with
10009 bucket index shards configured so that a single bucket can hold a large number of objects.
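(For reference, this is roughly how I check the EC profile and the shard count; the pool and
bucket names below are placeholders, not my real ones:

root:/home/ceph# ceph osd pool get <data-pool> erasure_code_profile
root:/home/ceph# radosgw-admin bucket stats --bucket=<bucket-name>

The bucket stats output reports the shard count under "num_shards", if I remember correctly.)

The current "ceph osd df" output looks like this: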
root:/home/ceph# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA    OMAP   META     AVAIL    %USE  VAR   PGS  STATUS
 0    hdd  0.00490   1.00000  5.0 GiB  318 MiB  58 MiB  1 KiB  260 MiB  4.7 GiB  6.21  0.94  143      up
 1    hdd  0.00490   1.00000  5.0 GiB  330 MiB  56 MiB  2 KiB  274 MiB  4.7 GiB  6.45  0.98  155      up
 2    hdd  0.00490   1.00000  5.0 GiB  372 MiB  60 MiB  1 KiB  313 MiB  4.6 GiB  7.28  1.10  148      up

(These OSDs serve only the RGW data pool, not the index pool, so I don't think the large
number of bucket index shards should affect the data pool's META size.)
On these OSDs, META occupies about as much space as the data itself. I know that OSD META is
essentially RocksDB, but it looks oversized to me even allowing for the space RocksDB legitimately needs.
I can run an OSD compaction to reduce the META size, but I want to understand why META is
larger than I expect.
Is there a way to decrease the META size by changing a parameter instead of compacting the OSDs?
I suspect that some of the "bluestore_rocksdb_options" settings can affect the META size.
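To be concrete, by "osd compact" I mean the per-OSD online compaction, and by changing a
parameter I mean something along these lines (the options string is only a placeholder, and as
far as I know bluestore_rocksdb_options is only read at OSD start, so a restart would be needed):

root:/home/ceph# ceph tell osd.0 compact
root:/home/ceph# ceph config set osd bluestore_rocksdb_options "<rocksdb options string>"
root:/home/ceph# ceph config get osd.0 bluestore_rocksdb_options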

Let me know if you need any other cluster parameters to help diagnose this.


