Re: Problem : "1 pools have many more objects per pg than average"

Injectargs causes an immediate runtime change; rebooting the mon would negate the change.
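
For a value that survives a daemon restart, the same option can also be written to the cluster's centralized config database (available since Mimic) instead of being injected at runtime. A rough sketch, reusing the option and mon name from the thread below:

# ceph config set global mon_pg_warn_max_object_skew 20
# ceph config get mon.dao-wkr-04 mon_pg_warn_max_object_skew

Putting the option in ceph.conf on the mon hosts before restarting them would also persist it.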

On Wed., Jan. 22, 2020, 4:41 p.m. St-Germain, Sylvain (SSC/SPC), <sylvain.st-germain@xxxxxxxxx> wrote:
///////////////////////// Problem ///////////////////////////

I've got a warning on my cluster that I cannot remove:

"1 pools have many more objects per pg than average"

Does somebody have some insight? I think it's normal to have this warning because I have just one pool in use, but how can I clear it?

Thx !

///////////////////////// INFORMATION ///////////////////////////

*** Here's some information about the cluster

# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average
MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
    pool default.rgw.buckets.data objects per pg (50567) is more than 14.1249 times cluster average (3580)
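
(For reference, the ratio in that message is simply the pool's objects-per-PG divided by the cluster-wide average, compared against mon_pg_warn_max_object_skew:

    50567 / 3580 ≈ 14.12  >  10

so the warning stays until either the pool's objects-per-PG drops or the threshold is raised above roughly 14.1.)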

# sudo ceph-conf -D | grep mon_pg_warn_max_object_skew
mon_pg_warn_max_object_skew = 10.000000

# ceph daemon mon.dao-wkr-04 config show | grep mon_pg_warn_max_object_skew
    "mon_pg_warn_max_object_skew": "10.000000"

# ceph -v
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)

# ceph df
RAW STORAGE:
    CLASS       SIZE            AVAIL       USED       RAW USED     %RAW USED
    hdd         873 TiB         823 TiB     50 TiB       51 TiB          5.80
    TOTAL       873 TiB         823 TiB     50 TiB       51 TiB          5.80

POOLS:
    POOL                           ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    .rgw.root                      11     3.5 KiB           8     1.5 MiB         0       249 TiB
    default.rgw.control            12         0 B           8         0 B         0       249 TiB
    default.rgw.meta               13      52 KiB         186      34 MiB         0       249 TiB
    default.rgw.log                14         0 B         207         0 B         0       249 TiB
    default.rgw.buckets.index      15     1.2 GiB         131     1.2 GiB         0       249 TiB
    cephfs_data                    29     915 MiB         202     1.5 GiB         0       467 TiB
    cephfs_metadata                30     145 KiB          23     2.1 MiB         0       249 TiB
    default.rgw.buckets.data       31      30 TiB      12.95M      50 TiB      6.32       467 TiB


# ceph osd dump | grep default.rgw.buckets.data
pool 31 'default.rgw.buckets.data' erasure size 8 min_size 6 crush_rule 2 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode on last_change 9502 lfor 0/2191/7577 flags hashpspool stripe_width 20480 target_size_ratio 0.4 application rgw
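
(A quick cross-check of those numbers, using the figures from ceph df and the pg_num above:

    12.95M objects / 256 PGs ≈ 50,600 objects per PG

which lines up with the 50567 reported by ceph health detail. Since this pool holds nearly all of the cluster's objects on only 256 PGs, the skew is expected. Besides raising the threshold, the other way to make the warning go away would be to give the pool more PGs, for example (1024 is only an illustrative value, and with autoscale_mode on the autoscaler may adjust it again):

# ceph osd pool set default.rgw.buckets.data pg_num 1024
)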

/////////////////////////// SOLUTION TRIED ///////////////////////////

1- I tried to increase the value of the mon_pg_warn_max_object_skew parameter:

# sudo ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 20'
# sudo ceph tell osd.* injectargs '--mon_pg_warn_max_object_skew 20'

+ and then rebooted the monitor.

The parameter didn't change.
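
(One hedged aside that may explain this: the documentation for mon_pg_warn_max_object_skew notes that the option is applied by the ceph-mgr daemons, not the mons or OSDs, so injecting it into mon.* and osd.* would not move this health check. Setting it for the mgr, for instance via the centralized config, may be what is needed:

# sudo ceph config set mgr mon_pg_warn_max_object_skew 20

followed by a restart of the active ceph-mgr to be sure it picks the new value up.)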
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
