Re: Quincy full osd(s)

Can you send along the output of "ceph osd pool ls detail" and "ceph health
detail"?

On Sun, Jul 24, 2022, 1:00 AM Nigel Williams <nigel.williams@xxxxxxxxxxx>
wrote:

> With current 17.2.1 (cephadm) I am seeing an unusual HEALTH_ERR. While
> adding files to a new, empty cluster (replica 3, CRUSH rule is by host),
> three OSDs became 95% full, and reweighting them to any value does not
> cause backfill to start.
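>
> The reweight attempts were of the form below (the OSD id and weight are
> placeholder values):
>
>     ceph osd reweight 1138 0.5    # placeholder id and weight; tried many values, no backfill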
>
> If I reweight the three too-full OSDs to 0.0, I get a large number of
> misplaced objects but no subsequent data movement; the cluster remains
> at HEALTH_WARN with "Low space hindering backfill". The cluster has
> 1200 OSDs (all except those three are close to empty).
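>
> If I read the warning right, "Low space hindering backfill" means the
> target OSDs are over the backfillfull threshold; the configured ratios
> can be checked with:
>
>     ceph osd dump | grep ratio    # prints full_ratio, backfillfull_ratio, nearfull_ratio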
>
> The balancer is on, and autoscale is on for the pool.
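>
> Both can be confirmed with:
>
>     ceph balancer status              # reports "active": true and the balancer mode
>     ceph osd pool autoscale-status    # lists the AUTOSCALE setting per pool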
>
> I feel I am overlooking something obvious; if anyone can suggest what
> it might be, that would be appreciated. Thanks.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


