Re: backfillfull osd - but it is only at 68% capacity

That problem seems to have cleared up.  We are in the middle of a massive rebalancing effort on a 700-OSD, 10 PB cluster that is wildly out of whack (because it got too full), and we occasionally see strange numbers reported.
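For anyone watching a similar rebalance, we mostly just keep an eye on the standard status output, e.g.:

    watch -n 30 ceph -s    # overall recovery/backfill progress and health flags
    ceph osd df tree       # per-OSD utilization as data moves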


________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Thursday, August 25, 2022 2:56 PM
To: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: backfillfull osd - but it is only at 68% capacity

Hi,

I’ve seen this many times in older clusters, mostly Nautilus (can’t
say much about Octopus or later). Apparently the root cause hasn’t
been fixed yet, but it should resolve after the recovery has finished.
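In the meantime you can sanity-check the numbers yourself with the
usual commands, e.g.:

    ceph osd dump | grep ratio   # cluster-wide full/backfillfull/nearfull ratios
    ceph osd df tree             # per-OSD SIZE, RAW USE and %USE
    ceph health detail           # which OSDs are currently flagged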

Quoting Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>:

> My cluster (Ceph Pacific) is complaining about one of the OSDs being
> backfillfull:
>
> [WRN] OSD_BACKFILLFULL: 1 backfillfull osd(s)
>
>     osd.31 is backfill full
>
> The full/backfillfull/nearfull ratios:
>
> full_ratio 0.95
>
> backfillfull_ratio 0.9
>
> nearfull_ratio 0.85
>
> ceph osd df shows:
>
> ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
> 31  hdd    5.55899  1.00000   5.6 TiB  3.8 TiB  3.7 TiB  411 MiB  6.7 GiB  1.8 TiB  68.13  0.92   83   up
>
> So, why does the cluster think that osd.31 is backfillfull if it's
> only at 68% capacity?
>
>



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



