rbd image creation command hangs in Jewel 10.2.2 (CentOS 7.2) on AWS Environment

The rbd image creation command hangs.

1. Configured a Ceph cluster in an AWS environment: Ceph Jewel v10.2.2 on CentOS Linux release 7.2.1511 (Core).
2. Created 6 OSDs (SSD drives) on each of the 3 OSD nodes, 24 OSDs in total across the cluster.
3. Edited the crush map by picking one SSD disk from each OSD node and wrote a rule for them (a sketch of the edit workflow follows this list).
4. Created a pool with the specific crush rule.
5. While creating an rbd image on the pool that uses the specific crush rule, the rbd command hangs and no further operations happen.

Note: I am able to create an rbd image with the default ruleset.
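
For reference, a minimal sketch of the crush map edit workflow in step 3, assuming the standard get/decompile/edit/recompile/inject cycle was used; the file names here are placeholders, not the ones actually used:

# Export and decompile the current crush map (file names are placeholders)
ceph osd getcrushmap -o crushmap.compiled
crushtool -d crushmap.compiled -o crushmap.txt

# Edit crushmap.txt (add the custom root and rule shown below), then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# Create a pool and point it at the custom rule (ruleset 5 here)
ceph osd pool create pool-a 256 256
ceph osd pool set pool-a crush_ruleset 5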

Crush Map Details:
===============
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
:
:
:
device 22 osd.22
device 23 osd.23
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host siteAosd {
        id -2           # do not change unnecessarily
        # weight 3.793
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.632
        item osd.1 weight 0.632
        item osd.2 weight 0.632
        item osd.3 weight 0.632
        item osd.4 weight 0.632
        item osd.5 weight 0.632
}
host siteBosd {
        id -3           # do not change unnecessarily
        # weight 3.793
        alg straw
        hash 0  # rjenkins1
        item osd.6 weight 0.632
        item osd.7 weight 0.632
        item osd.8 weight 0.632
        item osd.9 weight 0.632
        item osd.14 weight 0.632
        item osd.15 weight 0.632
}
host siteCosd1 {
        id -4           # do not change unnecessarily
        # weight 3.793
        alg straw
        hash 0  # rjenkins1
        item osd.10 weight 0.632
        item osd.11 weight 0.632
        item osd.12 weight 0.632
        item osd.13 weight 0.632
        item osd.16 weight 0.632
        item osd.17 weight 0.632
}
host siteCosd2 {
        id -5           # do not change unnecessarily
        # weight 3.793
        alg straw
        hash 0  # rjenkins1
        item osd.18 weight 0.632
        item osd.19 weight 0.632
        item osd.20 weight 0.632
        item osd.21 weight 0.632
        item osd.22 weight 0.632
        item osd.23 weight 0.632
}
root default {
        id -1           # do not change unnecessarily
        # weight 15.170
        alg straw
        hash 0  # rjenkins1
        item siteAosd weight 3.793
        item siteBosd weight 3.793
        item siteCosd1 weight 3.793
        item siteCosd2 weight 3.793
}
root test {
        id -11          # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
        item osd.5 weight 0.632
        item osd.15 weight 0.632
        item osd.17 weight 0.632
        item osd.23 weight 0.632
}
rule test1 {
        ruleset 5
        type replicated
        min_size 1
        max_size 10
        step take sds4
        step chooseleaf firstn 1 type host
        step emit
}
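
One way to sanity-check that ruleset 5 can produce complete mappings before assigning it to a pool is to test the compiled map offline with crushtool. A minimal sketch, assuming the compiled map is saved as crushmap.compiled (a placeholder name):

# Simulate ruleset 5 with 2 replicas and report any incomplete mappings
crushtool -i crushmap.compiled --test --rule 5 --num-rep 2 \
        --show-statistics --show-bad-mappings

Bad mappings here (PGs mapped to fewer OSDs than requested, or to none at all) are a strong hint that PGs in a pool using this rule may never become active.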

A. Creating an rbd image with the default ruleset:

[sdsuser@admin cluster]$ ceph osd pool create pool-a 256 256
pool 'pool-a' created
[sdsuser@admin cluster]$ ceph osd dump | grep pool-a
pool 30 'pool-a' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 923 flags hashpspool stripe_width 0
[sdsuser@admin cluster]$ rbd create --image pool-a/image-a --size 5G --image-feature layering
[sdsuser@admin cluster]$ rbd -p pool-a ls
image-a
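
For completeness, the image created on the default ruleset can be inspected before switching rulesets; a small check using the standard rbd CLI:

# Confirm the image exists and show its size, features and object layout
rbd info pool-a/image-a
rbd ls -l pool-a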

B. Creating an rbd image with a specific crush ruleset:

[sdsuser@admin cluster]$ ceph osd pool set pool-a crush_ruleset 5
set pool 30 crush_ruleset to 5
[sdsuser@admin cluster]$ ceph osd dump | grep pool-a
pool 30 'pool-a' replicated size 2 min_size 1 crush_ruleset 5 object_hash rjenkins pg_num 256 pgp_num 256 last_change 927 flags hashpspool stripe_width 0
[sdsuser@admin cluster]$ rbd create --image pool-a/image-b --size 5G --image-feature layering

Here the rbd create command hangs: no further operations happen, and neither an error nor a success message is shown.
The issue was observed on CentOS 7.2 in the AWS environment only with the specific crush ruleset, not with the default crush rule.

With the same procedure on my local setup, running the same Ceph Jewel v10.2.2 but on RHEL 7.2 (Maipo),
I am able to create the rbd image without any issues using the specific crush ruleset.

What could be the issue? Are there any reasons behind this behavior?

--Rakesh Parkiti

