PG status is "active+undersized+degraded"


Hi all,

 

I recently set up a Ceph cluster in my lab. The configuration should be okay as far as I understand it: 4 OSDs across 3 nodes, with 3 replicas. However, a couple of PGs are stuck in the state “active+undersized+degraded”. I think this must be a fairly common issue; could anyone help me out?

 

Here are the details of the Ceph cluster:

 

$ ceph -v          (jewel)

ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

 

# ceph osd tree

ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 5.89049 root default

-2 1.81360     host ceph3

2 1.81360         osd.2       up  1.00000          1.00000

-3 0.44969     host ceph4

3 0.44969         osd.3       up  1.00000          1.00000

-4 3.62720     host ceph1

0 1.81360         osd.0       up  1.00000          1.00000

1 1.81360         osd.1       up  1.00000          1.00000


# ceph health detail

HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2 pgs stuck undersized; 2 pgs undersized

pg 17.58 is stuck unclean for 61033.947719, current state active+undersized+degraded, last acting [2,0]

pg 17.16 is stuck unclean for 61033.948201, current state active+undersized+degraded, last acting [0,2]

pg 17.58 is stuck undersized for 61033.343824, current state active+undersized+degraded, last acting [2,0]

pg 17.16 is stuck undersized for 61033.327566, current state active+undersized+degraded, last acting [0,2]

pg 17.58 is stuck degraded for 61033.343835, current state active+undersized+degraded, last acting [2,0]

pg 17.16 is stuck degraded for 61033.327576, current state active+undersized+degraded, last acting [0,2]

pg 17.16 is active+undersized+degraded, acting [0,2]

pg 17.58 is active+undersized+degraded, acting [2,0]


# rados lspools

rbdbench


$ ceph osd pool get rbdbench size

size: 3


Where can I find more details about this issue? Any comments would be appreciated!
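In case it helps the discussion, here is a sketch of the commands typically used to dig into a stuck PG on a cluster like this (pg IDs and pool name taken from the output above; the /tmp paths are only illustrative):

```shell
# Full peering state and per-OSD info for one stuck PG
ceph pg 17.58 query

# Which CRUSH rule the pool uses, and how that rule chooses hosts
ceph osd pool get rbdbench crush_ruleset
ceph osd crush rule dump

# Decompile the CRUSH map to inspect the rules and tunables
ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

# List all PGs stuck in a given state
ceph pg dump_stuck undersized
```

These all run against the live cluster, so the output will of course depend on the actual CRUSH map.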

 

Best Regards,

Dave Chen

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
