Quoting Wido den Hollander (wido@xxxxxxxx):
>
> On 1/19/20 12:07 PM, Stefan Kooman wrote:
> > Hi,
> >
> > Is there any logic / filtering for which PGs to backfill at any given
> > time that takes into account the OSD the PG is living on?
> >
> > Our cluster is backfilling a complete pool now (512 PGs), and currently
> > 4 of the 7 active+remapped+backfilling PGs are on the same OSD, which
> > stresses that OSD far more than necessary. It would be nice if the
> > selection criteria for which PGs to backfill (and/or recover) included
> > the OSD as a criterion, in order to spread the load across different
> > OSDs.
>
> Afaik the OSDs decide this themselves. The primary OSD of a PG will
> negotiate with other OSDs to determine if they can backfill or not.

I found this: https://docs.ceph.com/docs/master/dev/osd_internals/backfill_reservation

> If you set max_backfills to 4 the OSDs will try to use this as much as
> possible.

It was set to "1".

> The MONs do not decide which OSDs start to backfill and which don't.

Check. So this is one of the drawbacks of a distributed design, I guess.

Gr. Stefan

--
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                    +31 318 648 688 / info@xxxxxx
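
For reference, the max_backfills knob discussed above is the `osd_max_backfills` option, and it can be inspected and raised at runtime. A minimal sketch (assuming a Nautilus-or-later cluster with the centralized config store; `osd.0` is just an example daemon id):

```shell
# Show the value a running OSD is actually using
ceph daemon osd.0 config get osd_max_backfills

# Persist a higher limit cluster-wide via the mon config store
ceph config set osd osd_max_backfills 4

# Or inject it into all running OSDs without persisting it
ceph tell 'osd.*' injectargs '--osd_max_backfills 4'
```

Note that raising it increases backfill parallelism per OSD at the cost of more concurrent recovery I/O competing with client traffic, so it is usually tuned together with the recovery throttles.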