Fixing Remapped PGs

Hi all,

We have a Ceph Kraken cluster. Last week we lost an OSD server, so we added one more OSD server with the same configuration and let the cluster recover, but the recovery never completed. Most PGs are still stuck in the remapped and degraded states. When I restart all the OSD daemons, that by itself fixes some PGs, but not all of them.
I think there is a bug in Ceph here. Please tell me if you want more info to debug this.
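
For reference, these are the commands I have been using to check which PGs are stuck (just a sketch; <pgid> in the last command is a placeholder, not a real PG id from this cluster):

ceph health detail
ceph pg dump_stuck unclean
ceph pg dump_stuck undersized
ceph pg dump_stuck degraded
ceph pg <pgid> query          # detailed state of one stuck PG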

root@node16:~# ceph -v
ceph version 11.2.1 (e0354f9d3b1eea1d75a7dd487ba8098311be38a7)

root@node15:~# ceph -s
    cluster 7c75f6e9-b858-4ac4-aa26-48ae1f33eda2
     health HEALTH_WARN
            361 pgs backfill_wait
            399 pgs degraded
            1 pgs recovering
            38 pgs recovery_wait
            399 pgs stuck degraded
            400 pgs stuck unclean
            361 pgs stuck undersized
            361 pgs undersized
            recovery 98076/465244 objects degraded (21.081%)
            recovery 102362/465244 objects misplaced (22.002%)
            recovery 1/153718 unfound (0.001%)
            pool cinder-volumes pg_num 300 > pgp_num 128
            pool ephemeral-vms pg_num 300 > pgp_num 128
            1 mons down, quorum 0,1 node15,node16
     monmap e2: 3 mons at {node15=10.0.5.15:6789/0,node16=10.0.5.16:6789/0,node17=10.0.5.17:6789/0}
            election epoch 1230, quorum 0,1 node15,node16
        mgr active: node15 standbys: node16
     osdmap e7924: 6 osds: 6 up, 6 in; 362 remapped pgs
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v16920461: 600 pgs, 2 pools, 586 GB data, 150 kobjects
            1400 GB used, 4165 GB / 5566 GB avail
            98076/465244 objects degraded (21.081%)
            102362/465244 objects misplaced (22.002%)
            1/153718 unfound (0.001%)
                 360 active+undersized+degraded+remapped+backfill_wait
                 200 active+clean
                  38 active+recovery_wait+degraded
                   1 active+recovering+undersized+degraded+remapped
                   1 active+remapped+backfill_wait
  client io 147 kB/s wr, 0 op/s rd, 21 op/s wr
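
Two things I notice in the status above: both pools still have pgp_num 128 while pg_num is 300, and there is one unfound object. This is roughly what I was planning to try next, but I have not run it yet (pool names are taken from the warnings above, <pgid> is a placeholder):

ceph osd pool set cinder-volumes pgp_num 300
ceph osd pool set ephemeral-vms pgp_num 300
ceph health detail | grep unfound      # find the PG holding the unfound object
ceph pg <pgid> list_unfound

Does that look like the right direction, or would it make things worse while backfill is still pending?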

root@node16:~# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 5.81839 root default                                      
-2 1.81839     host node9                                    
11 0.90919         osd.11       up  1.00000          1.00000 
 1 0.90919         osd.1        up  1.00000          1.00000 
-3 2.00000     host node10                                   
 0 1.00000         osd.0        up  1.00000          1.00000 
 2 1.00000         osd.2        up  1.00000          1.00000 
-4 2.00000     host node8                                    
 3 1.00000         osd.3        up  1.00000          1.00000 
 6 1.00000         osd.6        up  1.00000          1.00000
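
One more thing: the OSDs on node9 have slightly lower CRUSH weights than the ones on the other two hosts (0.90919 vs 1.00000). I am not sure whether that matters here, but if the weights are supposed to match, this is what I would run (again, not run yet):

ceph osd crush reweight osd.1 1.00
ceph osd crush reweight osd.11 1.00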



