Re: backfill_toofull after adding new OSDs

Yeah, this happens all the time during backfilling since Mimic; it's some
kind of bug. It always resolves itself, but it's still quite annoying.
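
If you want to rule out a genuinely full OSD first, comparing per-OSD usage
against the backfillfull threshold is usually enough; something along these
lines should do it (the ratio values in the comments are just the defaults,
not anything cluster-specific):

    # per-OSD utilisation (see the %USE and AVAIL columns)
    ceph osd df

    # thresholds currently set in the OSDMap
    # (defaults: nearfull 0.85, backfillfull 0.90, full 0.95)
    ceph osd dump | grep ratio

    # which PG is flagged backfill_toofull and which OSDs it maps to
    ceph health detail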


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Mar 6, 2019 at 3:33 PM Simon Ironside <sironside@xxxxxxxxxxxxx> wrote:
>
> I've just seen this when *removing* an OSD too.
> The issue resolved itself during recovery. The OSDs were not full, not even
> close; there's virtually nothing on this cluster.
> Mimic 13.2.4 on RHEL 7.6. OSDs are all Bluestore HDD with SSD DBs.
> Everything is otherwise default.
>
>    cluster:
>      id:     MY ID
>      health: HEALTH_ERR
>              1161/66039 objects misplaced (1.758%)
>              Degraded data redundancy: 220095/66039 objects degraded (333.280%), 137 pgs degraded
>              Degraded data redundancy (low space): 1 pg backfill_toofull
>
>    services:
>      mon: 3 daemons, quorum san2-mon1,san2-mon2,san2-mon3
>      mgr: san2-mon1(active), standbys: san2-mon2, san2-mon3
>      osd: 53 osds: 52 up, 52 in; 186 remapped pgs
>
>    data:
>      pools:   16 pools, 2016 pgs
>      objects: 22.01 k objects, 83 GiB
>      usage:   7.9 TiB used, 473 TiB / 481 TiB avail
>      pgs:     220095/66039 objects degraded (333.280%)
>               1161/66039 objects misplaced (1.758%)
>               1830 active+clean
>               134  active+recovery_wait+undersized+degraded+remapped
>               45   active+remapped+backfill_wait
>               3    active+recovering+undersized+remapped
>               3    active+recovery_wait+undersized+degraded
>               1    active+remapped+backfill_wait+backfill_toofull
>
>    io:
>      client:   60 KiB/s wr, 0 op/s rd, 5 op/s wr
>      recovery: 8.6 MiB/s, 110 objects/s
>
>
> On 07/02/2019 04:26, Brad Hubbard wrote:
> > Let's try to restrict discussion to the original thread
> > "backfill_toofull while OSDs are not full" and get a tracker opened up
> > for this issue.
> >
>
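
If a PG like the one in the status above ever stays in backfill_toofull
instead of clearing on its own, and the OSDs genuinely have the space, the
threshold can also be bumped temporarily while the backfill finishes,
something like:

    # raise the backfill threshold from the default 0.90 (revert afterwards)
    ceph osd set-backfillfull-ratio 0.92

0.92 is only an example value; pick whatever leaves comfortable headroom
below full_ratio.
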
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



