Re: How to remove remaining bucket index shard objects

Hi,

Thank you for your reply. Yesterday I ran compaction following the Red Hat document below (and ran a deep scrub again).
ref. https://access.redhat.com/solutions/5173092
The large omap objects warning appears to be resolved this time. However, based on our observations so far, it could reoccur within a few days. If it does, I'll run your script.
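For reference, the compaction was run roughly along these lines from the Rook toolbox pod (a sketch; the OSD ID here is illustrative, and the Red Hat article may describe a slightly different procedure):

$ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph tell osd.14 compact
$ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph osd deep-scrub 14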

Before the compaction:
$ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph osd df
ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE   DATA     OMAP     META     AVAIL     %USE  VAR    PGS  STATUS
<...snip...>
14    ssd   1.00000   1.00000    1 TiB    18 GiB   18 GiB  100 MiB  373 MiB  1006 GiB  1.80  90.08   98      up
23    ssd   1.00000   1.00000    1 TiB    11 GiB   11 GiB   55 MiB  629 MiB  1013 GiB  1.09  54.58  102      up
21    ssd   1.00000   1.00000    1 TiB    17 GiB   17 GiB   96 MiB  380 MiB  1007 GiB  1.69  84.57  100      up
22    ssd   1.00000   1.00000    1 TiB    12 GiB   12 GiB   18 MiB  395 MiB  1012 GiB  1.18  58.82  100      up
19    ssd   1.00000   1.00000    1 TiB    14 GiB   13 GiB   83 MiB  310 MiB  1010 GiB  1.33  66.59   93      up
20    ssd   1.00000   1.00000    1 TiB    16 GiB   15 GiB   93 MiB  612 MiB  1008 GiB  1.56  77.79  107      up

After the compaction, the OMAP size was reduced:
$ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph osd df
ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE   DATA     OMAP     META     AVAIL     %USE  VAR    PGS  STATUS
<...snip...>
14    ssd   1.00000   1.00000    1 TiB    18 GiB   18 GiB   40 MiB  115 MiB  1006 GiB  1.77  89.85   98      up
23    ssd   1.00000   1.00000    1 TiB    11 GiB   10 GiB   20 MiB   87 MiB  1013 GiB  1.04  52.54  102      up
21    ssd   1.00000   1.00000    1 TiB    17 GiB   17 GiB   42 MiB  111 MiB  1007 GiB  1.66  84.22  100      up
22    ssd   1.00000   1.00000    1 TiB    12 GiB   12 GiB   18 MiB   88 MiB  1012 GiB  1.15  58.15  100      up
19    ssd   1.00000   1.00000    1 TiB    13 GiB   13 GiB   18 MiB   96 MiB  1011 GiB  1.30  66.19   93      up
20    ssd   1.00000   1.00000    1 TiB    15 GiB   15 GiB   43 MiB  101 MiB  1009 GiB  1.50  76.17  107      up
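If the warning reappears, I expect it to show up again under health detail, which I'll check from the toolbox like so:

$ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph health detail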

Thanks,
Yuji
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



