We saw this with older Ceph versions; maybe just try restarting all OSDs of the affected PGs.
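For example, a rough sketch assuming systemd-managed OSDs (the PG ID and OSD number below are placeholders, not from your cluster):

    # show the up/acting OSD sets for one affected PG
    ceph pg map <pgid>

    # then, on the host holding each listed OSD, restart it, e.g.
    systemctl restart ceph-osd@32

Restart them one at a time and wait for peering to settle before moving on to the next OSD.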
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Sun, Nov 3, 2019 at 8:13 PM Kári Bertilsson <karibertils@xxxxxxxxx> wrote:
pgs: 14.377% pgs not active
     3749681/537818808 objects misplaced (0.697%)
     810 active+clean
     156 down
     124 active+remapped+backfilling
       1 active+remapped+backfill_toofull
       1 down+inconsistent

When looking at the down PGs, all disks are online:

41.3db 53775 0 0 0 401643186092 0 0 3044 down 6m 161222'303144 162913:4630171 [32,96,128,115,86,129,113,124,57,109]p32 [32,96,128,115,86,129,113,124,57,109]p32 2019-11-03

Any way to see why the PG is down?
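One way to inspect a down PG (a minimal sketch using standard Ceph commands; the PG ID is taken from the dump above):

    # detailed peering state; the recovery_state section usually
    # names what is blocking the PG, e.g. OSDs it is waiting on
    ceph pg 41.3db query

    # per-PG summary of why cluster health is degraded
    ceph health detail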
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx