Re: objects misplaced jumps up at 5%

On 2020-09-28 11:45, Jake Grimmett wrote:

> To show the cluster before and immediately after an "episode"
> 
> ***************************************************
> 
> [root@ceph7 ceph]# ceph -s
>   cluster:
>     id:     36ed7113-080c-49b8-80e2-4947cc456f2a
>     health: HEALTH_WARN
>             7 nearfull osd(s)
>             2 pool(s) nearfull
>             Low space hindering backfill (add storage if this doesn't
> resolve itself): 11 pgs backfill_toofull

What version are you running? I'm worried the nearfull OSDs might be the
culprit here. There was a bug involving nearfull OSDs [1] that has since
been fixed; you may or may not be hitting it. Check with "ceph osd df"
whether any OSDs are genuinely too full.
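As a quick way to eyeball the "ceph osd df" output, you can filter on the %USE column. A minimal sketch, using hypothetical sample data (the real output has more columns, and the default nearfull ratio of 85% may have been changed on your cluster):

```shell
# Hypothetical, simplified "ceph osd df" output: OSD name and %USE only.
sample='osd.0 84.1
osd.1 86.3
osd.2 91.0'

# Flag OSDs at or above the default nearfull ratio of 85%.
echo "$sample" | awk '$2 >= 85 { print $1, "is nearfull at", $2 "%" }'
```

Against a live cluster you would pipe the real "ceph osd df" output through a similar awk filter, adjusting the column number to match your Ceph version.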

You can use Dan's upmap-remapped.py [2] to map the misplaced PGs back to
their original location and get the cluster back to HEALTH_OK. You may
also want to trigger deep scrubs by hand, so scrubbing proceeds in the
most efficient order (instead of the cluster randomly picking PGs to
deep-scrub).
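Roughly, the workflow looks like this. (This is a sketch from memory: the script prints plain ceph CLI commands to stdout, and "ceph pg deep-scrub" takes a PG id; check the script's README and your own output before piping anything to a shell.)

```shell
# 1. Inspect the upmap commands the script proposes (it only prints them).
./upmap-remapped.py

# 2. If they look sane, apply them by piping to a shell.
./upmap-remapped.py | sh

# 3. Watch misplaced objects drop back toward zero.
ceph -s

# 4. Optionally deep-scrub a specific PG by hand (PG id is illustrative).
ceph pg deep-scrub 1.2f
```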

Gr. Stefan

[1]: https://tracker.ceph.com/issues/39555
[2]:
https://github.com/cernceph/ceph-scripts/blob/master/tools/upmap/upmap-remapped.py
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
