Cinder volume creation issues

Hi all,
        I'd like to know whether anyone else has run into the same issues.


I am trying an OpenStack + Ceph integration. I can create volumes from Horizon, and they are created in RADOS.

When I check the created volumes in the admin panel, all of them are shown as created on the same host (I tried creating 10 volumes, and all of them show the same host 'slave1'). I haven't changed the crushmap; I am using the default one that came with ceph-deploy.

nova-manage version
2013.2
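
For reference, the actual placement of a volume's data in RADOS can be inspected roughly like this (the 'volumes' pool name is taken from the ceph osd dump further down; the object name is a placeholder picked from the listing):

# list the RADOS objects backing the Cinder volumes
rados -p volumes ls | head
# take any object from that listing and show the PG and the acting OSDs that hold it
ceph osd map volumes <object-name>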

host master {
        id -2           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.010
}
host slave1 {
        id -3           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 0.010
}
host slave2 {
        id -4           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 0.010
}
root default {
        id -1           # do not change unnecessarily
        # weight 0.030
        alg straw     # do not change bucket size (3) unnecessarily
        hash 0  # rjenkins1
        item master weight 0.010 pos 0
        item slave1 weight 0.010 pos 1
        item slave2 weight 0.010 pos 2
}
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
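
If it is useful, the placement this rule produces can also be simulated offline with crushtool (the file name here is only an example):

# fetch the compiled crushmap from the cluster
ceph osd getcrushmap -o crush.bin
# simulate placements for ruleset 0 with 2 replicas and print the chosen OSDs
crushtool -i crush.bin --test --rule 0 --num-rep 2 --show-mappings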

ceph osd dump


pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 3 'volumes' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 19 owner 0
pool 4 'images' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 21 owner 0


Second issue
I am not able to attach volumes to instances when the hosts differ. For example, if a volume is created on host 'slave1', instance1 runs on host 'master', and instance2 runs on host 'slave1', I am able to attach the volume to instance2 but not to instance1.
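
In case it matters, this is the libvirt/cephx setup I understand every compute node needs before RBD volumes can be attached there; the client.cinder user name and the UUID are placeholders, and the UUID has to match rbd_secret_uuid in cinder.conf:

secret.xml (defined on each compute node):
<secret ephemeral='no' private='no'>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>

# register the secret and load the cephx key into libvirt on each compute node
virsh secret-define --file secret.xml
virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
        --base64 $(ceph auth get-key client.cinder)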


Has anyone else faced these issues with OpenStack and Ceph?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
