Is the autoscaler running [1]? You can see the status with:
ceph osd pool autoscale-status
If it's turned off, you can enable warn mode first to see what it would do:
ceph osd pool set <pool> pg_autoscale_mode warn
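If the suggested changes look reasonable, you could later switch the mode
from warn to on so the autoscaler applies them itself (<pool> is the same
placeholder as above):

ceph osd pool set <pool> pg_autoscale_mode on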
If the autoscaler doesn't help, you could increase pg_num manually to
512 and see how the distribution changes.
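For example (with <pool> again as a placeholder for your EC data pool;
pick the target that fits your OSD count):

ceph osd pool set <pool> pg_num 512

On recent releases pgp_num follows pg_num automatically, so setting
pg_num alone should be enough. You can then watch the per-OSD PG counts
with:

ceph osd df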
[1] https://docs.ceph.com/en/pacific/rados/operations/placement-groups/
Quoting mailing-lists <mailing-lists@xxxxxxxxx>:
Dear Ceph-Users,
I've recently set up a 4.3 PiB Ceph cluster with cephadm.
The cluster health is OK, as seen here:
ceph -s
  cluster:
    id:     8038f0xxx
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph-a2-07,ceph-a1-01,ceph-a1-10,ceph-a2-01,ceph-a1-05 (age 3w)
    mgr: ceph-a1-01.mkptvb(active, since 2d), standbys: ceph-a2-01.bznood
    osd: 306 osds: 306 up (since 3w), 306 in (since 3w)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    pools:   7 pools, 420 pgs
    objects: 7.74M objects, 30 TiB
    usage:   45 TiB used, 4.3 PiB / 4.3 PiB avail
    pgs:     420 active+clean
But the monitoring in the dashboard reports "CephPGImbalance" for
several OSDs. The balancer is enabled and set to upmap.
ceph balancer status
{
    "active": true,
    "last_optimize_duration": "0:00:00.011314",
    "last_optimize_started": "Mon Sep 26 14:23:32 2022",
    "mode": "upmap",
    "optimize_result": "Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect",
    "plans": []
}
My main data pool is not yet filled by much. It's roughly 50T filled,
and I've set its pg_num to 256. It is a 4+2 EC pool.
The average PG count per OSD is 6.6, but some OSDs actually have 1 and
some have up to 13 PGs... so it is in fact very unbalanced, and I
don't know how to solve this, since the balancer is telling me that
everything is just fine. Do you have a hint for me?
Best
Ken
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx