Hi Rok,

On Mon, 23 Dec 2024 at 07:28, Rok Jaklič <rjaklic@xxxxxxxxx> wrote:
> However I now see that the autoscaler is probably not working because of:
>
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.921+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool default.rgw.buckets.index won't scale due to overlapping roots: {-1, -18}
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.923+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool default.rgw.buckets.data won't scale due to overlapping roots: {-2, -1, -18}
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.929+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 1 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.929+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 2 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.930+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 3 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.931+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 4 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.931+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 5 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.932+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 6 contains an overlapping root -18... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.932+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 7 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.933+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 9 contains an overlapping root -2... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.934+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 10 contains an overlapping root -1... skipping scaling
> ceph-mgr.ctplmon1.log:2024-12-23T07:12:00.934+0100 7f949edad640 0 [pg_autoscaler WARNING root] pool 11 contains an overlapping root -1... skipping scaling

This has been answered by Eugen in the other thread. ;)

> Rok
>
> On Mon, Dec 23, 2024 at 6:45 AM Rok Jaklič <rjaklic@xxxxxxxxx> wrote:
>
>> pg_autoscale_mode is on for a particular pool (default.rgw.buckets.data) and EC 3+2 is used. During the pool's lifetime I have seen the PG number change automatically once, but now I am also considering changing the PG number manually after backfill completes.
>>
>> Right now pg_num 512 / pgp_num 512 is used and I am considering changing it to 1024. Do you think that would be too aggressive?

If my calculation is correct, your PGs are ~242 GB in size, which is not bad. In general, CRUSH can distribute the data more evenly when there are more (and therefore smaller) PGs. It definitely makes sense to increase the number if you expect twice the amount of data to be stored.

Cheers,
Alwin

croit GmbH, https://croit.io/
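
For reference, a minimal sketch of the commands this would involve, assuming a recent Ceph release and using the pool name from the thread (the ~242 GB per PG presumably being the pool's stored data divided by the current 512 PGs); whether you disable the autoscaler for the pool first is a choice to adapt to your cluster, not something established in the thread:

    # inspect the CRUSH roots and what the autoscaler reports per pool
    ceph osd crush tree
    ceph osd pool autoscale-status

    # current values for the EC data pool
    ceph osd pool get default.rgw.buckets.data pg_num
    ceph osd pool get default.rgw.buckets.data pgp_num

    # optional: keep the autoscaler from interfering with a manual change
    ceph osd pool set default.rgw.buckets.data pg_autoscale_mode off

    # double the PG count; on Nautilus and later, pgp_num is raised gradually to follow pg_num
    ceph osd pool set default.rgw.buckets.data pg_num 1024

Going from 512 to 1024 PGs splits every PG once and moves data, so doing it only after the current backfill finishes, as planned, keeps the two rebalances from overlapping.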