Hello Iban,

> After upgrading the monitors and mgrs to octopus (15.2.16) the system
> told me that some pools did not have the correct pg_num: some of them
> were above the optimum, and one of them (the busiest) was below it, at
> 256 when 1024 was required. I corrected the values and configured them
> according to the recommendations, except for the one that had to be
> increased from 256 to 512.

It's not usually a good idea to change anything about system
configuration until you've completed updating all of the daemons,
especially during major upgrades. In this case, though, I don't think
it's caused any harm.

> The thing is that it's been 2 days since the change and the system
> appears healthy, but PGs are continuously being remapped. The curious
> thing is that when it reaches 5% of objects misplaced, this value
> changes again (usually from 40 pgs to 45-48):

In all recent Ceph versions, when one changes the pg count, the system
will actually gradually split PGs and then gradually increase the pgp
count in the background until the target state is reached. You can see
progress via 'ceph osd pool ls detail' (pgp_num and pgp_num_target);
example commands are sketched at the end of this message. The amount of
backfill that gets scheduled is controlled by target_max_misplaced_ratio
('ceph config help target_max_misplaced_ratio'), which is 5% by default.
After this completes, the balancer will then take over and start
rebalancing the system.

> Do I have to wait for the process to finish, or is there something
> wrong with the configuration that needs to be fixed?

You could set 'nopgchange' on the pool if pgp_num != pgp_num_target and
then 'ceph balancer off', wait for backfill to drain, complete the
upgrade, then undo both of those settings (see the sketch at the end of
this message). That might be the safest option, though if it seems
close to finishing you may as well just wait, IMO.

The danger here, though evidently minor, is that one really shouldn't
be doing anything during a major upgrade other than upgrading, so the
path chosen should be the one that results in the least change to the
system.

Josh
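
Example commands for watching the split/backfill progress mentioned
above (only a sketch; 'volumes' is a placeholder for your busy pool,
and exact output varies a bit by release):

    # per-pool pg/pgp state; while the split is still in flight,
    # pgp_num will keep climbing toward pgp_num_target
    ceph osd pool ls detail | grep volumes

    # cluster-wide misplaced ratio, which is throttled against
    # target_max_misplaced_ratio (5% by default)
    ceph -s | grep misplaced

    # description and default value of the throttle itself
    ceph config help target_max_misplaced_ratio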
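
And a sketch of the pause-until-upgraded option described above (again,
'volumes' is just a placeholder; this assumes the standard pool-flag
and balancer commands):

    # freeze pg/pgp changes on the pool and stop the balancer
    ceph osd pool set volumes nopgchange 1
    ceph balancer off

    # ...wait for backfill to drain, finish upgrading the daemons...

    # then undo both settings so the split and balancing can resume
    ceph osd pool set volumes nopgchange 0
    ceph balancer on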