On Mon, Nov 26, 2018 at 10:28 AM Vladimir Brik
<vladimir.brik@xxxxxxxxxxxxxxxx> wrote:
>
> Hello
>
> I am doing some Ceph testing on a near-full cluster, and I noticed that,
> after I brought down a node, some OSDs' utilization reached
> osd_failsafe_full_ratio (97%). Why didn't it stop at mon_osd_full_ratio
> (90%) if mon_osd_backfillfull_ratio is 90%?

While I believe the very newest Ceph source does this, it can be
surprisingly difficult to predict the exact size a PG will occupy on
disk (thanks to omap/RocksDB data), so for a long time we simply didn't
try: these ratios were checked when a backfill started, but we made no
attempt to predict where OSDs would end up and limit ourselves based on
that.
-Greg

>
> Thanks,
>
> Vlad
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
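
To illustrate the distinction Greg describes — checking the ratio only at
backfill start versus predicting the post-backfill utilization — here is a
minimal Python sketch. The function names and the assumption that a PG's
on-disk size is exactly known are hypothetical (in practice omap/RocksDB
data makes that size hard to estimate, which is the whole point); this is
not Ceph code.

```python
# Hypothetical sketch of two admission policies for starting a backfill.
# None of these names come from Ceph, and the PG size is treated as
# exactly known, which in real clusters it is not.

BACKFILLFULL_RATIO = 0.90  # mirrors mon_osd_backfillfull_ratio

def check_at_start(used_bytes: int, capacity_bytes: int) -> bool:
    """Older behavior: allow a backfill unless the OSD is already
    past the backfillfull ratio when the backfill begins."""
    return used_bytes / capacity_bytes < BACKFILLFULL_RATIO

def check_predicted(used_bytes: int, capacity_bytes: int,
                    pg_bytes: int) -> bool:
    """Newer idea: also account for the estimated size of the incoming
    PG, refusing backfills that would push the OSD past the ratio."""
    return (used_bytes + pg_bytes) / capacity_bytes < BACKFILLFULL_RATIO

# An OSD at 85% utilization receiving a PG worth 10% of its capacity:
cap = 1_000_000_000_000   # 1 TB
used = 850_000_000_000    # 85% used
pg = 100_000_000_000      # incoming PG, ~10% of capacity

print(check_at_start(used, cap))       # True: backfill is allowed...
print(check_predicted(used, cap, pg))  # False: ...but would end at 95%
```

With the start-only check, the OSD above accepts the backfill and ends up
at 95% — past backfillfull and approaching osd_failsafe_full_ratio, which
matches the behavior Vladimir observed.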