2 pgs stuck in undersized after cluster recovery


Hi list,
	After recovering from losing some of the OSDs, I ended up with 2 PGs stuck in the undersized state.

         ceph health detail returns
PG_DEGRADED Degraded data redundancy: 2 pgs undersized
    pg 4.2 is stuck undersized for 3081.012062, current state active+undersized, last acting [13]
    pg 4.33 is stuck undersized for 3079.888961, current state active+undersized, last acting [35,2]
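
         If it helps, I can also post the full per-PG query output; a command along these lines (pg 4.2 as an example) should show the up/acting sets and the recovery state:

             ceph pg 4.2 query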

         I tried the following, and none of it worked (rough example commands are sketched below):
         restarting the affected OSDs
         marking the related OSDs out
         changing the pool pg_num
         ceph pg force-recovery
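
         Roughly, those steps map to commands like the following (osd.13, pg 4.2 and pg_num 512 are only examples, not an exact transcript of what I ran):

             systemctl restart ceph-osd@13            # restart an affected OSD
             ceph osd out 13                          # mark a related OSD out
             ceph osd pool set rbd_pool pg_num 512    # raise the pool pg_num
             ceph osd pool set rbd_pool pgp_num 512   # and pgp_num to match
             ceph pg force-recovery 4.2               # ask for recovery on a stuck PG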



         Ceph is Luminous 12.2.4,
         with the balancer on in upmap mode.
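
         Since the balancer is in upmap mode, the upmap entries for these PGs could be relevant; something like this (the grep pattern is just an example) should list any pg_upmap_items for them:

             ceph osd dump | grep -E 'pg_upmap.*(4\.2|4\.33)'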
 
         ceph osd pool ls detail result:
pool 3 'ec_rbd_pool' erasure size 6 min_size 5 crush_rule 2 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 4345 flags hashpspool,ec_overwrites stripe_width 16384 application rbd
pool 4 'rbd_pool' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 4606 lfor 0/4603 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3]

         
         ceph osd tree result:
ID  CLASS WEIGHT    TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       418.39743 root default
 -3        18.19119     host arms003-01
  0   hdd   9.09560         osd.0           up  1.00000 1.00000
  1   hdd   9.09560         osd.1           up  1.00000 1.00000
 -5        18.19119     host arms003-02
  2   hdd   9.09560         osd.2           up  1.00000 1.00000
  3   hdd   9.09560         osd.3           up  1.00000 1.00000
 -7        18.19119     host arms003-03
  4   hdd   9.09560         osd.4           up  1.00000 1.00000
  5   hdd   9.09560         osd.5           up  1.00000 1.00000
 -9        18.19119     host arms003-04
  6   hdd   9.09560         osd.6           up  1.00000 1.00000
  7   hdd   9.09560         osd.7           up  1.00000 1.00000
-11        18.19119     host arms003-05
  8   hdd   9.09560         osd.8           up  1.00000 1.00000
  9   hdd   9.09560         osd.9           up  1.00000 1.00000
-13        18.19119     host arms003-06
 10   hdd   9.09560         osd.10          up  1.00000 1.00000
 11   hdd   9.09560         osd.11          up  1.00000 1.00000
-15        18.19119     host arms003-07
 12   hdd   9.09560         osd.12          up  1.00000 1.00000
 13   hdd   9.09560         osd.13          up  1.00000 1.00000
-17        18.19119     host arms003-08
 14   hdd   9.09560         osd.14          up  1.00000 1.00000
 15   hdd   9.09560         osd.15          up  1.00000 1.00000
-19        18.19119     host arms003-09
 16   hdd   9.09560         osd.16          up  1.00000 1.00000
 17   hdd   9.09560         osd.17          up  1.00000 1.00000
-21        18.19119     host arms003-10
 18   hdd   9.09560         osd.18          up  1.00000 1.00000
 19   hdd   9.09560         osd.19          up  1.00000 1.00000
-25        18.19119     host arms003-12
 22   hdd   9.09560         osd.22          up  1.00000 1.00000
 23   hdd   9.09560         osd.23          up  1.00000 1.00000
-27        18.19119     host arms004-01
 24   hdd   9.09560         osd.24          up  1.00000 1.00000
 25   hdd   9.09560         osd.25          up  1.00000 1.00000
-29        18.19119     host arms004-02
 26   hdd   9.09560         osd.26          up  1.00000 1.00000
 27   hdd   9.09560         osd.27          up  1.00000 1.00000
-31        18.19119     host arms004-03
 28   hdd   9.09560         osd.28          up  1.00000 1.00000
 29   hdd   9.09560         osd.29          up  1.00000 1.00000
-33        18.19119     host arms004-04
 30   hdd   9.09560         osd.30          up  1.00000 1.00000
 31   hdd   9.09560         osd.31          up  1.00000 1.00000
-35        18.19119     host arms004-05
 32   hdd   9.09560         osd.32          up  1.00000 1.00000
 33   hdd   9.09560         osd.33          up  1.00000 1.00000
-37        18.19119     host arms004-06
 34   hdd   9.09560         osd.34          up  1.00000 1.00000
 35   hdd   9.09560         osd.35          up  1.00000 1.00000
-39        18.19119     host arms004-07
 36   hdd   9.09560         osd.36          up  1.00000 1.00000
 37   hdd   9.09560         osd.37          up  1.00000 1.00000
-41        18.19119     host arms004-08
 38   hdd   9.09560         osd.38          up  1.00000 1.00000
 39   hdd   9.09560         osd.39          up  1.00000 1.00000
-43         9.09560     host arms004-09
 40   hdd   9.09560         osd.40          up  1.00000 1.00000
-45        18.19119     host arms004-10
 42   hdd   9.09560         osd.42          up  1.00000 1.00000
 43   hdd   9.09560         osd.43          up  1.00000 1.00000
-47        18.19119     host arms004-11
 44   hdd   9.09560         osd.44          up  1.00000 1.00000
 45   hdd   9.09560         osd.45          up  1.00000 1.00000
-49        18.19119     host arms004-12
 46   hdd   9.09560         osd.46          up  1.00000 1.00000
 47   hdd   9.09560         osd.47          up  1.00000 1.00000
-51         9.09560     host mnv001
 48   hdd   9.09560         osd.48          up  1.00000 1.00000


crush rule for pool id=4 (replicated_rule):
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
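
         For reference, the rule above was taken from a dump along the lines of:

             ceph osd crush rule dump replicated_rule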

        

2018-06-30
shadow_lin



