pg stuck in peering

Hi,
I am having some trouble with a test cluster.
Many PGs have been stuck in the "peering" state since yesterday:

pg 1.2 is peering, acting [3,4]
pg 0.3 is peering, acting [3,4]
pg 2.1 is peering, acting [3,4]
pg 3.0 is peering, acting [3,4]
pg 1.3 is peering, acting [1,4]
pg 0.2 is peering, acting [3,4]
pg 2.0 is peering, acting [3,4]
pg 3.1 is peering, acting [1,4]
pg 4.6 is down+peering, acting [4,0]
pg 5.7 is peering, acting [3,1]
pg 6.4 is down+peering, acting [4,0]
pg 7.5 is peering, acting [3,1]
pg 0.1 is active+degraded, acting [4,0]
pg 2.3 is peering, acting [3,4]
pg 3.2 is peering, acting [3,4]
pg 4.5 is down+peering, acting [2,4]
pg 5.4 is down+peering, acting [2,4]
pg 6.7 is peering, acting [0,4]
pg 7.6 is peering, acting [0,4]
pg 1.1 is peering, acting [3,4]
pg 0.0 is active+degraded, acting [4,0]
pg 2.2 is peering, acting [1,4]
pg 3.3 is peering, acting [2,4]
pg 4.4 is peering, acting [1,4]
pg 5.5 is down+peering, acting [4,0]
pg 6.6 is peering, acting [3,1]
pg 7.7 is down+peering, acting [2,4]

HEALTH_WARN 332 pgs degraded; 316 pgs down; 316 pgs peering; 93 pgs stale; 316 pgs stuck inactive; 93 pgs stuck stale; 764 pgs stuck unclean
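
(A minimal way to dig further, assuming the standard ceph CLI; pg 1.2 below is just one example taken from the list above:)

# per-PG breakdown behind the HEALTH_WARN summary
ceph health detail

# ask the acting primary why this PG cannot finish peering;
# the "recovery_state" section of the output names what it is blocked on
ceph pg 1.2 query

# list every PG stuck inactive (peering counts as inactive)
ceph pg dump_stuck inactive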



Any advice? I think the unhealthy cluster is preventing me from running RGW, due to timeouts.
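
The down+peering PGs look like they are waiting on a down OSD; a quick way to confirm, again assuming the standard ceph CLI:

# show which OSDs are up/down and where they sit in the CRUSH map
ceph osd tree

# overall cluster status: monitor quorum, OSD up/in counts, PG states
ceph -s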