Hi Thomas,

I lost track of your issue. Are you just trying to balance the PGs?
14.2.8 has big improvements -- check the release notes / blog post
about setting upmap_max_deviation down to 2 or 1.

-- Dan

On Mon, Mar 16, 2020 at 4:00 PM Thomas Schneider <74cmonty@xxxxxxxxx> wrote:
>
> Hi Dan,
>
> I have opened this bug report for the balancer not working as expected:
> https://tracker.ceph.com/issues/43586
>
> Then I thought it could make sense to balance the cluster manually by
> moving PGs from a heavily loaded OSD to another.
>
> I found your slides "Luminous: pg upmap (dev)
> <https://indico.cern.ch/event/669931/contributions/2742401/attachments/1533434/2401109/upmap.pdf>",
> but I didn't fully understand them.
>
> Could you please advise how to move PGs manually?
>
> Regards
> Thomas
>
> On 23.01.2020 at 16:05, Dan van der Ster wrote:
> > Hi Frank,
> >
> > No, it is basically balancing the num_pgs per TB (per OSD).
> >
> > Cheers, Dan
> >
> > On Thu, Jan 23, 2020 at 3:53 PM Frank R <frankaritchie@xxxxxxxxx> wrote:
> >
> >     Hi all,
> >
> >     Does using the upmap balancer require that all OSDs be the same size
> >     (per device class)?
> >
> >     thx
> >     Frank
> >     _______________________________________________
> >     ceph-users mailing list -- ceph-users@xxxxxxx
> >     To unsubscribe send an email to ceph-users-leave@xxxxxxx
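
For reference, the two things discussed above can be sketched as CLI commands
(a sketch only, to be run against a test cluster first; the PG id and OSD ids
below are placeholders, not values from this thread):

```shell
# Nautilus 14.2.8+ balancer tuning Dan refers to: tighten the target
# deviation in PGs per OSD (1 gives the most even distribution).
ceph config set mgr mgr/balancer/upmap_max_deviation 1

# Make sure the upmap balancer is actually enabled and in upmap mode.
ceph balancer mode upmap
ceph balancer on
ceph balancer status

# Manual alternative (what Thomas asks about): remap one PG yourself.
# This tells the cluster to place PG 1.2f on osd.25 instead of osd.10.
# PG id and OSD ids here are hypothetical examples.
ceph osd pg-upmap-items 1.2f 10 25

# Remove the manual mapping again and fall back to the CRUSH placement.
ceph osd rm-pg-upmap-items 1.2f
```

Note that pg-upmap entries require all clients to be Luminous or newer
(`ceph osd set-require-min-compat-client luminous`), which is also a
precondition for the upmap balancer itself.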