Hi,

What do you mean by "strange"? This is normal: those objects exist only to hold OMAP data, not actual data; they are just keys into the key/value database. I do see that you have a lower number of objects, and that some of your PGs hold no data at all. I suggest checking that your buckets have been resharded properly.

This is how the index pool usually looks:

```
[root@ceph-mon0 tools]# ./show_osd_pool_pg_usage.sh default.rgw.buckets.index
id     used_mbytes  used_objects  omap_used_mbytes    omap_used_keys
--     -----------  ------------  ----------------    --------------
16.0   0            16            225.6920166015625   726530
16.1   0            12            159.67615509033203  540682
16.2   0            15            146.12717723846436  473920
16.3   0            13            166.71683406829834  562730
16.4   0            12            178.36385917663574  569170
16.5   0            14            155.09133911132812  460417
16.6   0            8             131.7033519744873   422210
16.7   0            14            212.63009643554688  710147
16.8   0            17            220.24763202667236  721355
16.9   0            5             72.98603820800781   217207
16.a   0            9             118.11988830566406  405243
16.b   0            17            261.47421073913574  822318
16.c   0            13            146.1312599182129   492984
16.d   0            14            177.95731925964355  564599
16.e   0            12            148.01206874847412  521814
16.f   0            14            195.41278457641602  646138
16.10  0            16            213.67611598968506  664704
16.11  0            13            149.605712890625    530920
16.12  0            6             53.71151638031006   193534
16.13  0            10            164.08148956298828  541205
16.14  0            18            298.43877506256104  969835
16.15  0            15            178.81976127624512  612723
16.16  0            16            185.0315399169922   609687
16.17  0            16            213.7925682067871   685970
16.18  0            7             102.77555465698242  355382
16.19  0            12            174.73625564575195  566747
16.1a  0            11            149.47032356262207  442862
16.1b  0            9             76.70710182189941   234997
16.1c  0            15            228.55529308319092  689091
16.1d  0            23            299.5473861694336   1019721
16.1e  0            13            204.62300395965576  668135
16.1f  0            15            210.5235834121704   706745
16.20  0            13            158.76861000061035  496776
16.21  0            11            132.43681621551514  454810
16.22  0            16            184.11674404144287  638401
16.23  0            14            207.3168830871582   726233
16.24  0            8             94.14496231079102   300377
16.25  0            9             154.20594692230225  490948
16.26  0            8             64.33307456970215   212119
16.27  0            8             81.12248992919922   280983
16.28  0            10            141.42350482940674  492944
16.29  0            9             134.44723510742188  480994
16.2a  0            11            162.04064846038818  555681
16.2b  0            15            191.3306827545166   693155
16.2c  0            14            175.26631832122803  602546
16.2d  0            11            142.69966793060303  521563
16.2e  0            11            165.8358974456787   497644
16.2f  0            17            230.93345642089844  774384
16.30  0            12            187.4592523574829   653213
16.31  0            9             155.2884464263916   471445
16.32  0            13            159.80840015411377  532461
16.33  0            14            202.63096618652344  685343
16.34  0            12            175.7840394973755   605797
16.35  0            17            264.1113214492798   839415
16.36  0            9             122.5332670211792   415767
16.37  0            17            262.4869623184204   896122
16.38  0            14            166.1954460144043   537803
16.39  0            17            242.67876815795898  748252
16.3a  0            10            142.9590721130371   473809
16.3b  0            10            95.30163097381592   320139
16.3c  0            15            216.7233304977417   721608
16.3d  0            10            125.26835060119629  416549
16.3e  0            13            199.71953010559082  698675
16.3f  0            12            175.03951740264893  602689
```

k

> On 13 Oct 2022, at 11:41, Yuji Ito (伊藤 祐司) <yuji-ito@xxxxxxxxxxxx> wrote:
>
> Unfortunately, the "large omap objects" message recurred last weekend. So I ran the script you showed to check the situation. `used_.*` is small, but `omap_.*` is large, which is strange. Do you have any idea what it is?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
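P.S. The `used_mbytes` / `omap_used_mbytes` columns above are just raw per-PG byte counts divided down to MiB. A minimal sketch of that conversion, with a canned sample line standing in for live cluster output (a real script would feed it from something like `ceph pg ls-by-pool default.rgw.buckets.index` — that source, and the field order, are assumptions here, not part of the original script):

```shell
# Hedged sketch: turn raw per-PG stats (bytes, objects, omap bytes, omap keys)
# into the MiB-based columns shown in the table above.
# The sample line is canned data, not taken from a live cluster.
sample="16.0 0 16 236658688 726530"   # pgid used_bytes used_objects omap_bytes omap_keys
echo "$sample" | awk '{
  printf "%s used_mbytes=%d used_objects=%d omap_used_mbytes=%.2f omap_used_keys=%d\n",
         $1, $2 / 1048576, $3, $4 / 1048576, $5   # 1048576 = bytes per MiB
}'
# prints: 16.0 used_mbytes=0 used_objects=16 omap_used_mbytes=225.70 omap_used_keys=726530
```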