PG active+remapped even though I have three hosts


 



Hello!

To me it looks like you have only one OSD on host Ceph-Stress-02, so that host has a CRUSH weight of 1 while each of the other two hosts has a weight of 7. If you want three replicas across only three hosts, you need roughly the same storage capacity (and CRUSH weight) on every host.
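
For example, comparing the per-host CRUSH weights in the osd tree you pasted (host names below are taken from it) makes the imbalance visible, and once Ceph-Stress-02 is brought up to a weight comparable to the other two hosts, the same crushtool test should stop reporting bad mappings. Roughly:

# per-host CRUSH weight: Ceph-Stress-01 and -03 are 7, Ceph-Stress-02 only 1
ceph osd tree | grep host

# after adding OSDs to (or reweighting) Ceph-Stress-02, re-run the rule test
crushtool --test -i crushmap --rule 1 --num-rep 3 --show-bad-mappings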




On Wed, Mar 8, 2017 at 4:50 AM +0100, "TYLin" <wooertim at gmail.com> wrote:


Hi all,

We have 4 PGs stuck active+remapped in our cluster. When we set the pool's crush_ruleset to ruleset 0 we get HEALTH_OK, but after switching the pool to ruleset 1, 4 PGs become active+remapped. The test result from crushtool also shows that some bad mappings exist. Does anyone happen to know the reason?
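
(For reference, switching a pool between rulesets is just the usual pool setting; for the rbd pool shown below that would be something like:

ceph osd pool set rbd crush_ruleset 0    # -> HEALTH_OK
ceph osd pool set rbd crush_ruleset 1    # -> 4 pgs active+remapped
)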



pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 421 flags hashpspool stripe_width 0

[root@Ceph-Stress-01 ~]# ceph pg ls remapped
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES LOG DISK_LOG STATE           STATE_STAMP                VERSION REPORTED UP     UP_PRIMARY ACTING    ACTING_PRIMARY LAST_SCRUB SCRUB_STAMP                LAST_DEEP_SCRUB DEEP_SCRUB_STAMP
0.33          0                  0        0         0       0     0   0        0 active+remapped 2017-03-07 22:53:02.453447     0'0  419:110 [3,22]          3  [3,22,4]              3        0'0 2017-03-07 13:29:32.110280             0'0 2017-03-07 13:29:32.110280
0.3b          0                  0        0         0       0     0   0        0 active+remapped 2017-03-07 22:53:02.619526     0'0  419:110 [2,20]          2 [2,20,17]              2        0'0 2017-03-07 13:29:32.110287             0'0 2017-03-07 13:29:32.110287
0.49          0                  0        0         0       0     0   0        0 active+remapped 2017-03-07 22:53:02.453239     0'0  419:104 [4,20]          4 [4,20,19]              4        0'0 2017-03-07 13:29:32.110257             0'0 2017-03-07 13:29:32.110257
0.54          0                  0        0         0       0     0   0        0 active+remapped 2017-03-07 22:53:02.101725     0'0  419:207 [19,3]         19 [19,3,20]             19        0'0 2017-03-07 13:29:32.110262             0'0 2017-03-07 13:29:32.110262


rule replicated_ruleset_one_host {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type osd
        step emit
}
rule replicated_ruleset_multi_host {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
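
(In case it helps anyone reproducing this, the rules above and the crushtool run further down presumably come from the usual extract/decompile/test cycle, roughly:

ceph osd getcrushmap -o crushmap          # binary map passed to crushtool below
crushtool -d crushmap -o crushmap.txt     # decompiled text containing the rules above
crushtool -c crushmap.txt -o crushmap     # recompile after any edits
crushtool --test -i crushmap --rule 1 --num-rep 3 --show-bad-mappings
)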

ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 15.00000 root default
-2  7.00000     host Ceph-Stress-01
 0  1.00000         osd.0                up  1.00000          1.00000
 1  1.00000         osd.1                up  1.00000          1.00000
 2  1.00000         osd.2                up  1.00000          1.00000
 3  1.00000         osd.3                up  1.00000          1.00000
 4  1.00000         osd.4                up  1.00000          1.00000
 5  1.00000         osd.5                up  1.00000          1.00000
 6  1.00000         osd.6                up  1.00000          1.00000
-4  7.00000     host Ceph-Stress-03
16  1.00000         osd.16               up  1.00000          1.00000
17  1.00000         osd.17               up  1.00000          1.00000
18  1.00000         osd.18               up  1.00000          1.00000
19  1.00000         osd.19               up  1.00000          1.00000
20  1.00000         osd.20               up  1.00000          1.00000
21  1.00000         osd.21               up  1.00000          1.00000
22  1.00000         osd.22               up  1.00000          1.00000
-3  1.00000     host Ceph-Stress-02
 7  1.00000         osd.7                up  1.00000          1.00000


[root@Ceph-Stress-02 ~]# crushtool --test -i crushmap --rule 1 --min-x 1 --max-x 5 --num-rep 3 --show-utilization --show-mappings --show-bad-mappings
rule 1 (replicated_ruleset_multi_host), x = 1..5, numrep = 3..3
CRUSH rule 1 x 1 [5,7,21]
CRUSH rule 1 x 2 [17,6]
bad mapping rule 1 x 2 num_rep 3 result [17,6]
CRUSH rule 1 x 3 [19,7,1]
CRUSH rule 1 x 4 [5,22,7]
CRUSH rule 1 x 5 [21,0]
bad mapping rule 1 x 5 num_rep 3 result [21,0]
rule 1 (replicated_ruleset_multi_host) num_rep 3 result size == 2:      2/5
rule 1 (replicated_ruleset_multi_host) num_rep 3 result size == 3:      3/5
  device 0:              stored : 1      expected : 1
  device 1:              stored : 1      expected : 1
  device 5:              stored : 2      expected : 1
  device 6:              stored : 1      expected : 1
  device 7:              stored : 3      expected : 1
  device 17:             stored : 1      expected : 1
  device 19:             stored : 1      expected : 1
  device 21:             stored : 2      expected : 1
  device 22:             stored : 1      expected : 1

Thanks,
Tim
