pg's stuck for 4-5 days after reaching backfill_toofull

Hi Guys,

We ran into this issue after we nearly maxed out the OSDs. Since then, we have cleaned up a lot of data on the OSDs, but the PGs seem to have been stuck for the last 4 to 5 days. I have run "ceph osd reweight-by-utilization", and that did not seem to work.

Any suggestions? 
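For reference, here is the kind of diagnostic/workaround sequence usually tried for backfill_toofull in this situation; the 0.90 ratio below is illustrative, not a value taken from this cluster, so adjust to taste and lower it back once backfill completes:

    # Show which PGs are stuck and why
    ceph health detail
    ceph pg dump_stuck unclean

    # Per-OSD utilization, to spot OSDs still above the backfill threshold
    ceph osd df

    # Temporarily raise the backfill-full ratio (default 0.85) so backfill
    # can resume; revert once the cluster has rebalanced
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'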


ceph -s
    cluster 909c7fe9-0012-4c27-8087-01497c661511
     health HEALTH_WARN 224 pgs backfill; 130 pgs backfill_toofull; 86 pgs backfilling; 4 pgs degraded; 14 pgs recovery_wait; 324 pgs stuck unclean; recovery -11922/573322 objects degraded (-2.079%)
     monmap e5: 5 mons at {Lab-mon001=x.x.96.12:6789/0,Lab-mon002=x.x.96.13:6789/0,Lab-mon003=x.x.96.14:6789/0,Lab-mon004=x.x.96.15:6789/0,Lab-mon005=x.x.96.16:6789/0}, election epoch 28, quorum 0,1,2,3,4 Lab-mon001,Lab-mon002,Lab-mon003,Lab-mon004,Lab-mon005
     mdsmap e6: 1/1/1 up {0=Lab-mon001=up:active}
     osdmap e10598: 495 osds: 492 up, 492 in
      pgmap v1827231: 21568 pgs, 3 pools, 221 GB data, 184 kobjects
            4142 GB used, 4982 GB / 9624 GB avail
            -11922/573322 objects degraded (-2.079%)
                   9 active+recovery_wait
               21244 active+clean
                  90 active+remapped+wait_backfill
                   5 active+recovery_wait+remapped
                   4 active+degraded+remapped+wait_backfill
                 130 active+remapped+wait_backfill+backfill_toofull
                  86 active+remapped+backfilling
  client io 0 B/s rd, 0 op/s

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
