pgs stuck undersized and degraded

On a very small (3-node) cluster, I have one pool with a replication size of 3 that is showing some stuck PGs.
This pool has 64 PGs, and the other PGs in the pool seem fine, each mapped to 3 OSDs.
All the PGs in the other pools are also fine.
Why would these PGs be stuck with only 2 OSDs in their acting set?
The osd crush chooseleaf type is 1 (host), and the osd tree is shown below.
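For what it's worth, here is a toy restatement of the constraint that chooseleaf type = 1 (host) places on this pool: each PG of a size-3 pool needs one up OSD on each of three distinct hosts. This is not Ceph's actual CRUSH algorithm (the hash-based pick is purely illustrative), and the host/OSD layout is taken from the osd tree below, with osd.11 down on aus06:

```python
import hashlib

# Toy sketch of "chooseleaf type = 1 (host)" for a size=3 pool:
# every PG must map to one *up* OSD on each of three distinct hosts.
# NOT Ceph's CRUSH algorithm; the hash-based pick is only illustrative.
hosts = {
    "aus01": [0, 2, 3, 4],
    "aus05": [5, 6, 7, 8],
    "aus06": [9, 10],  # osd.11 is down, leaving only two up OSDs here
}

def place(pgid, size=3):
    """Deterministically pick one up OSD on each of `size` distinct hosts."""
    chosen = []
    for host in sorted(hosts)[:size]:
        up_osds = hosts[host]
        h = int(hashlib.md5(f"{pgid}:{host}".encode()).hexdigest(), 16)
        chosen.append(up_osds[h % len(up_osds)])
    return chosen

print(place("13.34"))  # one OSD from each of aus01, aus05, aus06
```

The stuck PGs above are mapped to only two OSDs, i.e. they never got a third distinct host, which is what I am trying to understand.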

-- Tom Deneau

pg_stat  state                       up     up_primary  acting   acting_primary
-------  --------------------------  -----  ----------  -------  --------------
13.34    active+undersized+degraded  [0,7]  0           [0,7]    0
13.3a    active+undersized+degraded  [2,8]  2           [2,8]    2
13.a     active+undersized+degraded  [8,2]  8           [8,2]    8
13.e     active+undersized+degraded  [0,8]  0           [0,8]    0
13.3c    active+undersized+degraded  [2,5]  2           [2,5]    2
13.22    active+undersized+degraded  [8,2]  8           [8,2]    8
13.1b    active+undersized+degraded  [2,8]  2           [2,8]    2
13.21    active+undersized+degraded  [8,0]  8           [8,0]    8
13.1e    active+undersized+degraded  [8,3]  8           [8,3]    8
13.1f    active+undersized+degraded  [4,6]  4           [4,6]    4
13.2a    active+remapped             [7,4]  7           [7,4,0]  7
13.33    active+undersized+degraded  [7,2]  7           [7,2]    7
13.0     active+undersized+degraded  [0,7]  0           [0,7]    0

ID WEIGHT  TYPE NAME                  UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 4.94997 root default
-2 1.79999     host aus01
 0 0.45000         osd.0                   up  1.00000          1.00000
 2 0.45000         osd.2                   up  1.00000          1.00000
 3 0.45000         osd.3                   up  1.00000          1.00000
 4 0.45000         osd.4                   up  1.00000          1.00000
-3 1.79999     host aus05
 5 0.45000         osd.5                   up  1.00000          1.00000
 6 0.45000         osd.6                   up  1.00000          1.00000
 7 0.45000         osd.7                   up  1.00000          1.00000
 8 0.45000         osd.8                   up  1.00000          1.00000
-4 1.34999     host aus06
 9 0.45000         osd.9                   up  1.00000          1.00000
10 0.45000         osd.10                  up  1.00000          1.00000
11 0.45000         osd.11                down        0          1.00000
 1       0 osd.1                         down        0          1.00000
