Hi Fulvio,

> leads to a much shorter and less detailed page, and I assumed Nautilus
> was far behind Quincy in managing this...

The only major change I'm aware of between Nautilus and Quincy is that in
Quincy the mClock scheduler can automatically tune backfill parameters up
or down to achieve better speed and/or balance against client I/O. The
reservation mechanics themselves are unchanged.

> Thanks for "pgremapper", will give it a try once I have finished current
> data movement: will it still work after I upgrade to Pacific?

We're not aware of any Pacific incompatibilities at this time (we've
tested it there, and community members have used it against Pacific),
though the tool has been used most heavily on Luminous and Nautilus, as
the README implies.

> You are correct, it would be best to drain OSDs cleanly, and I see
> pgremapper has an option for this, great!

Despite its name, I don't usually recommend the "drain" command for
draining a batch of OSDs. Confusing, I know! "Drain" is best used when
you intend to move the data back afterwards, and if you give it multiple
targets, it won't balance data across those targets. The reason for this
is that "drain" doesn't pay attention to the CRUSH-preferred PG location
or to target fullness, and thus it can make suboptimal placement choices.

For your use case, I would recommend: downweight the OSDs on the host to
0.001 (can't be 0 - upmaps won't work) -> cancel-backfill (to map the
data back onto the host) -> undo-upmaps in a loop to optimally drain the
host. There are rough sketches of both of the above at the end of this
mail.

Josh
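
For reference, the Quincy knob I mentioned above is the mClock profile.
A minimal sketch (the profile names below are from the Quincy docs, not
anything specific to your cluster; verify with
"ceph config help osd_mclock_profile"):

  # Show the currently active mClock profile (default is "balanced"):
  ceph config get osd osd_mclock_profile

  # Bias the scheduler toward recovery/backfill over client I/O:
  ceph config set osd osd_mclock_profile high_recovery_ops

  # Or bias it back toward client I/O:
  ceph config set osd osd_mclock_profile high_client_ops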
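
And a rough sketch of the drain workflow (the OSD IDs 10-13 are made up
for illustration, and I'm going from memory on pgremapper's flags, so
double-check them against the README before running anything):

  # 1. CRUSH-reweight every OSD on the host to 0.001 (not 0, since
  #    upmaps can't land on a zero-weight OSD):
  for osd in 10 11 12 13; do
      ceph osd crush reweight osd.$osd 0.001
  done

  # 2. Create upmaps pinning the remapped PGs back where they currently
  #    live, halting the mass backfill this reweight just triggered:
  pgremapper cancel-backfill --yes

  # 3. Peel those upmaps off in batches so backfill drains the host at a
  #    controlled pace; rerun as backfill completes, until no pg_upmap
  #    entries for these OSDs remain in "ceph osd dump":
  pgremapper undo-upmaps 10 11 12 13 --yes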