I set up a test cluster (Pacific 16.2.7 deployed with cephadm) with
several HDDs of two different sizes, 1.8 TB and 3.6 TB; they have
weights 1.8 and 3.6, respectively, and there are 2 pools
(metadata+data for CephFS). The PG count currently varies from 177 to
182 on the OSDs with small disks and from 344 to 352 on those with big
disks. To me everything looks fine: big OSDs have more PGs than small
ones, and the ratio reflects the disk weight ratio quite nicely.
Still, I get a "high pg count deviation" warning for every big OSD in
the monitoring section of the Ceph dashboard, with messages like this:
OSD osd.4 on bofur deviates by more than 30% from average PG count.
I don't understand the reason for these warnings since, as explained
above, the PG count looks good to me. It seems the monitoring doesn't
take the disk weights into account and considers only the raw PG count
for this metric, thus inevitably generating a warning on any cluster
with mixed disk sizes. Can this be true? If so, is this the intended
behavior or a bug?
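To make the question concrete, here is a small sketch of the arithmetic I suspect is behind the warning. This is not Ceph's actual alert code; the OSD names and the four-small/two-big mix are assumptions for illustration, while the PG counts and weights are the ones from my cluster above:

```python
# Hypothetical sketch (NOT Ceph's actual check) of a deviation alert that
# compares each OSD's raw PG count against the unweighted cluster average.
# Assumed layout: 4 small OSDs (1.8 TB, ~180 PGs) and 2 big ones (3.6 TB,
# ~348 PGs), matching the numbers reported above.

pg_counts = {"osd.0": 180, "osd.1": 180, "osd.2": 180, "osd.3": 180,
             "osd.4": 348, "osd.5": 348}
weights = {"osd.0": 1.8, "osd.1": 1.8, "osd.2": 1.8, "osd.3": 1.8,
           "osd.4": 3.6, "osd.5": 3.6}

# Raw average ignores CRUSH weight entirely.
avg = sum(pg_counts.values()) / len(pg_counts)

# Flag any OSD more than 30% away from that raw average.
flagged = [osd for osd, pgs in pg_counts.items()
           if abs(pgs - avg) / avg > 0.30]
print(flagged)  # only the big OSDs cross the 30% threshold

# Normalizing by weight shows the placement is actually balanced:
# roughly the same number of PGs per unit of CRUSH weight everywhere.
per_weight = {osd: round(pg_counts[osd] / weights[osd], 1)
              for osd in pg_counts}
print(per_weight)
```

With these numbers the raw average is 236 PGs, so the big OSDs deviate by about 47% and get flagged even though PGs-per-weight is nearly identical (100 vs. ~96.7) across all disks.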
Thanks in advance for any help.
Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx