Hi Fulvio,

https://docs.ceph.com/en/quincy/dev/osd_internals/backfill_reservation/ describes the prioritization and reservation mechanism used for recovery and backfill. AIUI, unless a PG is below min_size, all backfills for a given pool will be at the same priority. force-recovery will modify the PG priority, but doing so can have a very delayed effect, because a given backfill can be waiting behind a bunch of other backfills that have acquired partial reservations, which in turn are waiting behind other backfills that have partial reservations, and so on. Once you are doing degraded backfill, you have lost a lot of control over your system.

Rather than ripping out hosts like you did here, operators who want to retain control will drain hosts without degradation. https://github.com/digitalocean/pgremapper is one tool that can help with this, though depending on the size of the system one can sometimes simply downweight the host and then wait.

Josh

On Thu, Sep 22, 2022 at 6:35 AM Fulvio Galeazzi <fulvio.galeazzi@xxxxxxx> wrote:
>
> Hello all,
>     taking advantage of the redundancy of my EC pool, I destroyed a
> couple of servers in order to reinstall them with a new operating system.
> I am on Nautilus (but will evolve soon to Pacific), and today I am
> not in "emergency mode": this is just for my education. :-)
>
> "ceph pg dump" shows a couple of PGs with 3 missing chunks, some others
> with 2, and several with 1 missing chunk: that's fine and expected.
> Having looked at it for a while, I think I understand the recovery queue
> is unique: there is no internal higher priority for 3-missing-chunk PGs
> with respect to 1-missing-chunk PGs, right?
> I tried to issue "ceph pg force-recovery" on the few worst-degraded PGs
> but, apparently, the numbers of 3-missing, 2-missing and 1-missing PGs
> are going down at the same relative speed.
> Is this expected? Can I do something to "guide" the process?
>
> Thanks for your hints
>
>     Fulvio
>
> --
> Fulvio Galeazzi
> GARR-CSD Department
> skype: fgaleazzi70
> tel.: +39-334-6533-250
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
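[Editorial note] The reservation-chain effect Josh describes above, where a force-recovery'd PG can still wait behind backfills holding partial reservations, can be sketched with a toy Python model. This is not Ceph code: the single backfill slot per OSD, the priority values 100 and 255, and the lack of preemption are all simplifying assumptions for illustration (recent Ceph releases can actually preempt lower-priority reservations). The point is only that raising a PG's priority reorders the wait queues but cannot evict reservations that were already granted.

```python
from heapq import heappush, heappop
from itertools import count

_seq = count()  # FIFO tie-break among equal priorities

class OSD:
    def __init__(self, name):
        self.name = name
        self.holder = None   # PG currently holding this OSD's single backfill slot
        self.waiting = []    # max-heap of (-priority, seq, pg) waiters

class PG:
    def __init__(self, name, osds, prio):
        self.name, self.osds, self.prio = name, osds, prio
        self.granted = set()

def request_reservations(pg):
    # Try to reserve every OSD this backfill needs; queue where busy.
    # A PG can end up holding some slots while waiting for others --
    # these are the "partial reservations" from the reply above.
    for osd in pg.osds:
        if osd.holder is None:
            osd.holder = pg
            pg.granted.add(osd)
        else:
            heappush(osd.waiting, (-pg.prio, next(_seq), pg))

def run(pgs):
    # Repeatedly complete whichever PG holds all of its reservations,
    # then hand each freed slot to the highest-priority waiter.
    order, pending = [], list(pgs)
    while pending:
        ready = next(p for p in pending if len(p.granted) == len(p.osds))
        order.append(ready.name)
        pending.remove(ready)
        for osd in ready.osds:
            osd.holder = None
            if osd.waiting:
                _, _, nxt = heappop(osd.waiting)
                osd.holder = nxt
                nxt.granted.add(osd)
    return order

osd_a, osd_b = OSD("A"), OSD("B")
pg0 = PG("pg0", [osd_b], 100)         # already backfilling on B
pg1 = PG("pg1", [osd_a, osd_b], 100)  # grabs A, waits on B (partial reservation)
pg2 = PG("pg2", [osd_a], 255)         # "forced": top priority, needs only A

for pg in (pg0, pg1, pg2):
    request_reservations(pg)

order = run([pg0, pg1, pg2])
print(order)  # ['pg0', 'pg1', 'pg2'] -- the forced PG still finishes last
```

Even with the highest priority, pg2 finishes last: pg1 already holds the slot on A and will not release it until pg0 frees B, so the forced PG's wait time is governed by the chain of partial reservations ahead of it, not by its own priority.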