PG allocations are not balanced across devices

Hello, 

After updating to Ceph Pacific 16.2.7 we are seeing a lot of alerts from Alertmanager with the message:

OSD osd.6 on ceph-01 deviates by more than 30% from average PG count.

The dashboard says 13.9 PGs per OSD on average, but this particular OSD has only 9 PGs.

Another alert, for example, says:

OSD osd.19 on ceph-02 deviates by more than 30% from average PG count.

osd.19, for example, has 19 PGs.
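
If I understand the CephPGImbalance rule correctly, it compares each OSD's PG count against the cluster average, so with the 13.9 average both OSDs would indeed trip the threshold: |9 - 13.9| / 13.9 ≈ 35% and |19 - 13.9| / 13.9 ≈ 37%, both above 30%.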

Is this behaviour OK? Should I just silence the CephPGImbalance alert in Alertmanager, or is something wrong?

We have 3 servers with 6 SSDs each. On version 16.2.6, for example, we never had this type of alert.

Our pools are:

nfs with 32 active+clean PGs
cephfs.CephFS.data with 32 active+clean PGs
cephfs.CephFS.meta with 32 active+clean PGs
device_health_metrics with 1 active+clean PG

In all pools we use just the default PG autoscaler and replica 3.
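
For reference, this is a small script I put together to see the per-OSD distribution (just a sketch; it assumes "ceph osd df -f json" is available on the host and that each entry in its "nodes" array reports the OSD's PG count in a "pgs" field, which is what I see on Pacific):

#!/usr/bin/env python3
# List per-OSD PG counts and their deviation from the cluster average.
import json
import subprocess

THRESHOLD = 0.30  # the 30% deviation the CephPGImbalance alert is about

# "ceph osd df -f json" returns a "nodes" array with one entry per OSD;
# the "pgs" field there is the OSD's current PG count.
out = subprocess.run(
    ["ceph", "osd", "df", "-f", "json"],
    check=True, capture_output=True, text=True,
).stdout
nodes = json.loads(out)["nodes"]

pg_counts = {n["name"]: n["pgs"] for n in nodes}
avg = sum(pg_counts.values()) / len(pg_counts)
print(f"average PGs per OSD: {avg:.1f}")

for name, pgs in sorted(pg_counts.items()):
    deviation = abs(pgs - avg) / avg
    flag = "  <-- would alert" if deviation > THRESHOLD else ""
    print(f"{name}: {pgs} PGs ({deviation:.0%} from average){flag}")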


Thank you.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



