Hi,

The large omap alert appears to have been resolved as of last week, although I don't know the underlying reason. When I received your email and went to collect the data, I noticed that the alerts had already stopped; OMAP usage was 0 bytes, as shown below. To make sure, I ran a deep scrub (sketched after the output below) and waited for a while, but the alert has not recurred so far.

Before the alerts stopped, another team rebooted the node running the OSDs and other modules for maintenance, which may have had an impact. However, such reboots are done every week and had already happened three times after the compaction, so the root cause remains uncertain. Since there is a possibility of recurrence, I will take a wait-and-see approach.

```
$ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph -s
  cluster:
    id:     49bd471e-84e6-412e-8ed0-75d7bc176657
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum b,d,f (age 4d)
    mgr: b(active, since 4d), standbys: a
    osd: 96 osds: 96 up (since 4d), 96 in (since 4d)
    rgw: 6 daemons active (6 hosts, 2 zones)

  data:
    pools:   16 pools, 4432 pgs
    objects: 10.19k objects, 34 GiB
    usage:   161 GiB used, 787 TiB / 787 TiB avail
    pgs:     4432 active+clean

  io:
    client:   3.1 KiB/s rd, 931 B/s wr, 3 op/s rd, 2 op/s wr

$ OSD_POOL=ceph-poc-object-store-ssd-index.rgw.buckets.index
$ (header="id used_mbytes used_objects omap_used_mbytes omap_used_keys"
> echo "${header}"
> echo "${header}" | tr '[[:alpha:]_' '-'
> kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph pg ls-by-pool "${OSD_POOL}" --format=json | jq -r '.pg_stats |
>   sort_by(.stat_sum.num_bytes) | .[] | (.pgid, .stat_sum.num_bytes/1024/1024,
>   .stat_sum.num_objects, .stat_sum.num_omap_bytes/1024/1024,
>   .stat_sum.num_omap_keys)' | paste - - - - -) | column -t
id    used_mbytes  used_objects  omap_used_mbytes  omap_used_keys
--    -----------  ------------  ----------------  --------------
6.0   0            0             0                 0
6.1   0            0             0                 0
6.2   0            0             0                 0
6.3   0            0             0                 0
6.4   0            1             0                 0
6.5   0            1             0                 0
6.6   0            0             0                 0
6.7   0            0             0                 0
6.8   0            0             0                 0
6.9   0            0             0                 0
6.a   0            0             0                 0
6.b   0            0             0                 0
6.c   0            0             0                 0
6.d   0            0             0                 0
6.e   0            0             0                 0
6.f   0            1             0                 0
6.10  0            1             0                 0
6.11  0            0             0                 0
6.12  0            0             0                 0
6.13  0            0             0                 0
6.14  0            0             0                 0
6.15  0            0             0                 0
6.16  0            0             0                 0
6.17  0            0             0                 0
6.18  0            0             0                 0
6.19  0            1             0                 0
6.1a  0            1             0                 0
6.1b  0            0             0                 0
6.1c  0            0             0                 0
6.1d  0            0             0                 0
6.1e  0            1             0                 0
6.1f  0            0             0                 0
6.20  0            1             0                 0
6.21  0            0             0                 0
6.22  0            0             0                 0
6.23  0            0             0                 0
6.24  0            0             0                 0
6.25  0            0             0                 0
6.26  0            0             0                 0
6.27  0            1             0                 0
6.28  0            0             0                 0
6.29  0            0             0                 0
6.2a  0            1             0                 0
6.2b  0            0             0                 0
6.2c  0            0             0                 0
6.2d  0            0             0                 0
6.2e  0            0             0                 0
6.2f  0            0             0                 0
6.30  0            0             0                 0
6.31  0            1             0                 0
6.32  0            1             0                 0
6.33  0            0             0                 0
6.34  0            0             0                 0
6.35  0            0             0                 0
6.36  0            0             0                 0
6.37  0            0             0                 0
6.38  0            0             0                 0
6.39  0            0             0                 0
6.3a  0            0             0                 0
6.3b  0            0             0                 0
6.3c  0            0             0                 0
6.3d  0            0             0                 0
6.3e  0            0             0                 0
6.3f  0            0             0                 0
6.40  0            0             0                 0
6.41  0            1             0                 0
6.42  0            0             0                 0
6.43  0            0             0                 0
6.44  0            0             0                 0
6.45  0            1             0                 0
6.46  0            0             0                 0
6.47  0            0             0                 0
6.48  0            0             0                 0
6.49  0            0             0                 0
6.4a  0            0             0                 0
6.4b  0            0             0                 0
6.4c  0            0             0                 0
6.4d  0            0             0                 0
6.4e  0            1             0                 0
6.4f  0            0             0                 0
6.50  0            0             0                 0
6.51  0            1             0                 0
6.52  0            0             0                 0
6.53  0            0             0                 0
6.54  0            0             0                 0
6.55  0            0             0                 0
6.56  0            0             0                 0
6.57  0            0             0                 0
6.58  0            0             0                 0
6.59  0            0             0                 0
6.5a  0            0             0                 0
6.5b  0            0             0                 0
6.5c  0            0             0                 0
6.5d  0            1             0                 0
6.5e  0            0             0                 0
6.5f  0            0             0                 0
6.60  0            0             0                 0
6.61  0            0             0                 0
6.62  0            0             0                 0
6.63  0            0             0                 0
6.64  0            0             0                 0
6.65  0            0             0                 0
6.66  0            1             0                 0
6.67  0            0             0                 0
6.68  0            0             0                 0
6.69  0            0             0                 0
6.6a  0            0             0                 0
6.6b  0            0             0                 0
6.6c  0            0             0                 0
6.6d  0            0             0                 0
6.6e  0            0             0                 0
6.6f  0            0             0                 0
6.70  0            3             0                 0
6.71  0            0             0                 0
6.72  0            0             0                 0
6.73  0            0             0                 0
6.74  0            0             0                 0
6.75  0            0             0                 0
6.76  0            0             0                 0
6.77  0            0             0                 0
6.78  0            0             0                 0
6.79  0            0             0                 0
6.7a  0            1             0                 0
6.7b  0            0             0                 0
6.7c  0            0             0                 0
6.7d  0            0             0                 0
6.7e  0            0             0                 0
6.7f  0            0             0                 0
```
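For reference, a minimal sketch of how a deep scrub can be triggered for every PG of the index pool via the toolbox pod. It assumes the same ceph-poc namespace and rook-ceph-tools deployment as in the transcript above; the exact commands I ran may have differed slightly.

```
# Ask the primary OSD of each PG in the index pool to run a deep scrub.
# Minimal sketch; assumes the ceph-poc namespace and rook-ceph-tools
# deployment shown in the transcript above.
OSD_POOL=ceph-poc-object-store-ssd-index.rgw.buckets.index
for pgid in $(kubectl exec -n ceph-poc deploy/rook-ceph-tools -- \
      ceph pg ls-by-pool "${OSD_POOL}" --format=json | jq -r '.pg_stats[].pgid'); do
  kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph pg deep-scrub "${pgid}"
done
```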
Thanks,
Yuji

From: Konstantin Shalygin <k0ste@xxxxxxxx>
Sent: Wednesday, October 19, 2022 16:42
To: Yuji Ito (伊藤 祐司) <yuji-ito@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: How to remove remaining bucket index shard objects

These stats are strange; at least one object should exist for these OMAPs. Try to deep-scrub this PG, and try to list the objects in this PG: `rados ls --pgid 6.2`

k

Sent from my iPhone
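To extend the `rados ls --pgid` check suggested above from a single PG to the whole index pool, the following is a minimal sketch, assuming the same ceph-poc namespace and rook-ceph-tools toolbox as in the transcript earlier in the thread:

```
# List any remaining objects in each PG of the index pool, following the
# `rados ls --pgid` suggestion above. Minimal sketch; assumes the same
# ceph-poc namespace and rook-ceph-tools deployment as in the transcript.
OSD_POOL=ceph-poc-object-store-ssd-index.rgw.buckets.index
for pgid in $(kubectl exec -n ceph-poc deploy/rook-ceph-tools -- \
      ceph pg ls-by-pool "${OSD_POOL}" --format=json | jq -r '.pg_stats[].pgid'); do
  echo "== ${pgid} =="
  kubectl exec -n ceph-poc deploy/rook-ceph-tools -- rados ls --pgid "${pgid}"
done
```

Any leftover bucket index shard objects (typically named `.dir.*`) would show up in this listing even when the pool-level stats read zero.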