Re: What could cause mon_osd_full_ratio to be exceeded?

On Mon, Nov 26, 2018 at 10:28 AM Vladimir Brik
<vladimir.brik@xxxxxxxxxxxxxxxx> wrote:
>
> Hello
>
> I am doing some Ceph testing on a near-full cluster, and I noticed that,
> after I brought down a node, some OSDs' utilization reached
> osd_failsafe_full_ratio (97%). Why didn't it stop at mon_osd_full_ratio
> (90%) if mon_osd_backfillfull_ratio is 90%?

While I believe the very newest Ceph source handles this, it can be
surprisingly difficult to determine in advance how much space a PG
will take up on disk (thanks to omap/RocksDB data). So for a long time
we pretty much didn't try: the ratios were checked when a backfill
started, but we made no attempt to predict where utilization would end
up and limit ourselves based on that.
-Greg
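
For reference, the cluster-wide ratios live in the OSDMap and can be
inspected and adjusted at runtime. A minimal sketch (the values shown
are the usual defaults, not necessarily what Vladimir's cluster uses,
and osd.0 is just an example daemon):

    # Show the cluster-wide ratios stored in the OSDMap
    ceph osd dump | grep ratio

    # Adjust them at runtime (Luminous and later)
    ceph osd set-full-ratio 0.95
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-nearfull-ratio 0.85

    # osd_failsafe_full_ratio is a per-OSD config option; read it via
    # the admin socket on the OSD's host
    ceph daemon osd.0 config get osd_failsafe_full_ratio

    # Watch per-OSD utilization while backfill runs
    ceph osd df

Since, as Greg says, the ratios are only checked when a backfill
starts, a PG that grows during backfill can carry an OSD past
backfillfull_ratio and toward osd_failsafe_full_ratio before anything
stops it.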

>
>
> Thanks,
>
> Vlad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



