One PG blocked at active+undersized+degraded+remapped+backfilling

We want to change the crush rule of the radosgw index pool from sata to ssd, so we ran:

    ceph osd pool set default.rgw.buckets.index crush_ruleset x

All of the index PGs migrated to the ssd OSDs, but one PG is still stuck on sata and cannot be migrated; its state is active+undersized+degraded+remapped+backfilling.
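For reference, this is roughly how we picked the rule id (the rule name/id below, ssd_rule / 5, are only examples; the real values come from the rule dump):

[sa101 ~]# ceph osd crush rule list                                      # names of all crush rules
[sa101 ~]# ceph osd crush rule dump ssd_rule                             # rule_id of the ssd rule
[sa101 ~]# ceph osd pool get default.rgw.buckets.index crush_ruleset     # rule the pool used before
[sa101 ~]# ceph osd pool set default.rgw.buckets.index crush_ruleset 5   # switch the pool to the ssd rule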


ceph version: 10.2.5
default.rgw.buckets.index: size=2, min_size=1
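For completeness, the pool settings can be confirmed with:

[sa101 ~]# ceph osd pool get default.rgw.buckets.index size
[sa101 ~]# ceph osd pool get default.rgw.buckets.index min_size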


How can I solve the problem of continuous backfilling?


 health HEALTH_WARN
 1 pgs backfilling
 1 pgs degraded
 1 pgs stuck unclean
 1 pgs undersized
 1 requests are blocked > 32 sec
 recovery 13/548944664 objects degraded (0.000%)
 recovery 31/548944664 objects misplaced (0.000%)
 monmap e1: 3 mons at {sa101=192.168.8.71:6789/0,sa102=192.168.8.72:6789/0,sa103=192.168.8.73:6789/0}
            election epoch 198, quorum 0,1,2 sa101,sa102,sa103
     osdmap e113094: 311 osds: 295 up, 295 in; 1 remapped pgs
            flags noout,noscrub,nodeep-scrub,sortbitwise,require_jewel_osds
      pgmap v62454723: 4752 pgs, 15 pools, 134 TB data, 174 Mobjects
            409 TB used, 1071 TB / 1481 TB avail
            13/548944664 objects degraded (0.000%)
            31/548944664 objects misplaced (0.000%)
                4751 active+clean
                   1 active+undersized+degraded+remapped+backfilling

[sa101 ~]# ceph pg map 11.28
osdmap e113094 pg 11.28 (11.28) -> up [251,254] acting [192]

[sa101 ~]# ceph health detail
HEALTH_WARN 1 pgs backfilling; 1 pgs degraded; 1 pgs stuck unclean; 1 pgs undersized; 1 requests are blocked > 32 sec; 1 osds have slow requests; recovery 13/548949428 objects degraded (0.000%); recovery 31/548949428 objects misplaced (0.000%); noout,noscrub,nodeep-scrub,sortbitwise,require_jewel_osds flag(s) set
pg 11.28 is stuck unclean for 624019.077931, current state active+undersized+degraded+remapped+backfilling, last acting [192]
pg 11.28 is active+undersized+degraded+remapped+backfilling, acting [192]
1 ops are blocked > 32.768 sec on osd.192
1 osds have slow requests
recovery 13/548949428 objects degraded (0.000%)
recovery 31/548949428 objects misplaced (0.000%)
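In case more detail is useful, this is what I can gather next (assuming osd.192 is the stuck acting primary; the osd-host prompt below is a placeholder for whichever node carries osd.192):

[sa101 ~]# ceph pg 11.28 query                            # full recovery/backfill state of the stuck pg
[osd-host ~]# ceph daemon osd.192 dump_ops_in_flight      # details of the blocked request (run on the osd.192 host)
[osd-host ~]# systemctl restart ceph-osd@192              # only if kicking the primary is acceptable (noout is already set)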
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
