Hi Vasiliy,
Thanks, but I still see the same error:
cinder.conf (and of course I restarted the cinder-volume service):
# default volume type to use (string value)
[rbd-cephvolume]
rbd_user = cinder
rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
volume_backend_name=rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
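
(Side note: if I understand cinder's multi-backend setup correctly, a named
backend section is only loaded when it is also enabled in [DEFAULT]; a minimal
sketch, assuming the section name from above:

[DEFAULT]
enabled_backends = rbd-cephvolume   # section name taken from above

The qemu log still shows:)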
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
Regards, I
2015-11-06 13:00 GMT+01:00 Vasiliy Angapov <angapov@xxxxxxxxx>:
In cinder.conf you should place these options:
rbd_user = cinder
rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
in the [rbd-cephvolume] section instead of [DEFAULT].
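That is, the section would end up looking like this (a sketch using the values
from your mail):

[rbd-cephvolume]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx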
2015-11-06 19:45 GMT+08:00 Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>:
> Hi,
> One more step debugging this issue (hypervisor/nova-compute node is XEN
> 4.4.2):
>
> I think the problem is that libvirt is not getting the correct user or
> credentials tu access pool, on instance qemu log i see:
>
> xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
> xen be: qdisk-51760: initialise() failed
> xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
> xen be: qdisk-51760: initialise() failed
> xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
>
> But using the user cinder on the pool volumes works:
>
> rbd ls -p volumes --id cinder
> test
> volume-4d26bb31-91e8-4646-8010-82127b775c8e
> volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
> volume-7da08f12-fb0f-4269-931a-d528c1507fee
>
> Running:
>
> qemu-img info -f rbd rbd:volumes/test
>
> does not work, but passing the user cinder and the ceph.conf file directly
> works fine:
>
> qemu-img info -f rbd rbd:volumes/test:id=cinder:conf=/etc/ceph/ceph.conf
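>
> Presumably libvirt should be passing the same id/secret itself. One way to
> check (a sketch; the libvirt domain name is assumed) is to look at the disk
> definition of the running instance:
>
> virsh dumpxml instance-00000001 | grep -A 3 '<auth'   # domain name assumed
>
> If the secret is wired up, the rbd disk should carry something like:
>
> <auth username='cinder'>
>   <secret type='ceph' uuid='67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX'/>
> </auth>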
>
> I think nova.conf is set correctly (in the [libvirt] section):
> images_rbd_pool = volumes
> images_rbd_ceph_conf = /etc/ceph/ceph.conf
> hw_disk_discard=unmap
> rbd_user = cinder
> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
>
> And looking at libvirt:
>
> # virsh secret-list
> setlocale: No such file or directory
> UUID                                  Usage
> --------------------------------------------------------------------------------
> 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX  ceph client.cinder secret
>
>
> # virsh secret-get-value 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
> setlocale: No such file or directory
> AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
>
> # cat /etc/ceph/ceph.client.cinder.keyring
> [client.cinder]
> key = AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
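>
> (A quick sanity check, sketched in bash, that the two values really are
> identical:
>
> diff <(virsh secret-get-value 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX) \
>      <(awk '/key = / {print $3}' /etc/ceph/ceph.client.cinder.keyring)
>
> No output means they match.)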
>
>
> Any ideas will be welcome.
> Regards, I
>
> 2015-11-04 10:51 GMT+01:00 Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>:
>>
>> Dear Cephers,
>>
>> I still cannot attach volumes to my cloud machines; the Ceph version is
>> 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) and OpenStack is Juno.
>>
>> Nova and Cinder are able to create volumes on Ceph:
>> cephvolume:~ # rados ls --pool volumes
>> rbd_header.1f7784a9e1c2e
>> rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
>> rbd_directory
>> rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
>> rbd_header.23d5e33b4c15c
>> rbd_id.volume-4d26bb31-91e8-4646-8010-82127b775c8e
>> rbd_header.20407190ce77f
>>
>> cloud:~ # cinder list
>>
>> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
>> | ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
>> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
>> | 4d26bb31-91e8-4646-8010-82127b775c8e | in-use | None         | 2    | rbd         | false    | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
>> +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
>>
>>
>> nova:~ # nova volume-attach 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb 4d26bb31-91e8-4646-8010-82127b775c8e auto
>> +----------+--------------------------------------+
>> | Property | Value                                |
>> +----------+--------------------------------------+
>> | device   | /dev/xvdd                            |
>> | id       | 4d26bb31-91e8-4646-8010-82127b775c8e |
>> | serverId | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
>> | volumeId | 4d26bb31-91e8-4646-8010-82127b775c8e |
>> +----------+--------------------------------------+
>>
>> From the nova-compute node (Ubuntu 14.04 LTS) I see the
>> attaching/detaching:
>> cloud01:~ # dpkg -l | grep ceph
>> ii  ceph-common    0.94.5-1trusty  amd64  common utilities to mount and interact with a ceph storage cluster
>> ii  libcephfs1     0.94.5-1trusty  amd64  Ceph distributed file system client library
>> ii  python-cephfs  0.94.5-1trusty  amd64  Python libraries for the Ceph libcephfs library
>> ii  librbd1        0.94.5-1trusty  amd64  RADOS block device client library
>> ii  python-rbd     0.94.5-1trusty  amd64  Python libraries for the Ceph librbd library
>>
>> In cinder.conf:
>>
>> rbd_user = cinder
>> rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
>>
>> [rbd-cephvolume]
>> volume_backend_name=rbd
>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>> rbd_pool = volumes
>> rbd_ceph_conf = /etc/ceph/ceph.conf
>> rbd_flatten_volume_from_snapshot = false
>> rbd_max_clone_depth = 5
>> rbd_store_chunk_size = 4
>> rados_connect_timeout = -1
>> glance_api_version = 2
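>>
>> (A quick awk sketch to see which section each rbd_ option actually sits
>> in; options above the first section header land in [DEFAULT]:
>>
>> awk '/^\[/{s=$0} /^rbd_/{print (s?s:"[DEFAULT]")": "$0}' /etc/cinder/cinder.conf
>> )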
>>
>> And in nova.conf:
>> rbd_user=cinder
>>
>> # The libvirt UUID of the secret for the rbd_user volumes
>> # (string value)
>> rbd_secret_uuid=67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
>>
>> images_rbd_pool=volumes
>>
>> # Path to the ceph configuration file to use (string value)
>> images_rbd_ceph_conf=/etc/ceph/ceph.conf
>>
>> ls -la /etc/libvirt/secrets
>> total 16
>> drwx------ 2 root root 4096 Nov  4 10:28 .
>> drwxr-xr-x 7 root root 4096 Oct 22 13:15 ..
>> -rw------- 1 root root   40 Nov  4 10:28 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxx.base64
>> -rw------- 1 root root  170 Nov  4 10:25 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxx.xml
>>
>>
>>
>> 2015-11-04 10:39:42.573 11653 INFO nova.compute.manager [req-8b2a9793-4b39-4cb0-b291-e492c350387e b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance: 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Detach volume 4d26bb31-91e8-4646-8010-82127b775c8e from mountpoint /dev/xvdd
>> 2015-11-04 10:40:43.266 11653 INFO nova.compute.manager [req-35218de0-3f26-496b-aad9-5c839143da17 b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance: 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Attaching volume 4d26bb31-91e8-4646-8010-82127b775c8e to /dev/xvdd
>>
>> But on the cloud machine (SL6) the volume (xvdd) never shows up:
>> [root@cloud5 ~]# cat /proc/partitions
>> major minor #blocks name
>>
>> 202 0 20971520 xvda
>> 202 16 209715200 xvdb
>> 202 32 10485760 xvdc
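>>
>> (From the hypervisor side, a sketch of how to confirm whether the backend
>> device was ever created for the guest; the libvirt/Xen domain name is
>> assumed:
>>
>> xl block-list instance-00000001   # domain name assumed
>>
>> This lists the guest's vbds and their backend state.)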
>>
>> Thanks in advance, I
>>
>> 2015-11-03 11:18 GMT+01:00 Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>:
>>>
>>> Hi all,
>>> For the last week I have been trying to integrate our pre-existing Ceph
>>> cluster with our OpenStack instance.
>>> The Ceph-Cinder integration was easy (or at least I think so!!).
>>> There is only one pool (volumes) used to attach block storage to our
>>> cloud machines.
>>>
>>> The client.cinder user has permissions on this pool (following the
>>> guides):
>>> ...............
>>> client.cinder
>>> key: AQAonXXXXXXXRAAPIAj9iErv001a0k+vyFdUg==
>>> caps: [mon] allow r
>>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes
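>>>
>>> (For reference, these caps can be (re)applied in one command:
>>>
>>> ceph auth caps client.cinder mon 'allow r' \
>>>     osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
>>> )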
>>>
>>> ceph.conf file seems to be OK:
>>>
>>> [global]
>>> fsid = 6f5a65a7-316c-4825-afcb-428608941dd1
>>> mon_initial_members = cephadm, cephmon02, cephmon03
>>> mon_host = 10.10.3.1,10.10.3.2,10.10.3.3
>>> auth_cluster_required = cephx
>>> auth_service_required = cephx
>>> auth_client_required = cephx
>>> filestore_xattr_use_omap = true
>>> osd_pool_default_size = 2
>>> public_network = 10.10.0.0/16
>>> cluster_network = 192.168.254.0/27
>>>
>>> [osd]
>>> osd_journal_size = 20000
>>>
>>> [client.cinder]
>>> keyring = /etc/ceph/ceph.client.cinder.keyring
>>>
>>> [client]
>>> rbd cache = true
>>> rbd cache writethrough until flush = true
>>> admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
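>>>
>>> (The admin socket configured above can be queried while a client is
>>> running, e.g. to confirm the cache settings took effect; the exact socket
>>> name varies, since the $pid/$cctid fields here are made up:
>>>
>>> ceph --admin-daemon /var/run/ceph/ceph-client.cinder.1234.5678.asok config show | grep rbd_cache
>>> )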
>>>
>>>
>>> The trouble seems to be that the blocks are created using client.admin
>>> instead of client.cinder.
>>>
>>> From the cinder machine:
>>>
>>> cinder:~ # rados ls --pool volumes
>>> rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
>>> rbd_directory
>>> rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
>>> rbd_header.23d5e33b4c15c
>>> rbd_header.20407190ce77f
>>>
>>> But if I try to list them using the cinder client identity:
>>>
>>>
>>> cinder:~ # rados ls --pool volumes --secret client.cinder
>>> "empty answer"
>>>
>>> cinder:~ # ls -la /etc/ceph
>>> total 24
>>> drwxr-xr-x 2 root root 4096 nov 3 10:17 .
>>> drwxr-xr-x 108 root root 4096 oct 29 09:52 ..
>>> -rw------- 1 root root 63 nov 3 10:17 ceph.client.admin.keyring
>>> -rw-r--r-- 1 cinder cinder 67 oct 28 13:44 ceph.client.cinder.keyring
>>> -rw-r--r-- 1 root root 454 oct 1 13:56 ceph.conf
>>> -rw-r--r-- 1 root root 73 sep 27 09:36 ceph.mon.keyring
>>>
>>>
>>> From a client node (I assume this machine only needs the cinder
>>> key...):
>>>
>>> cloud28:~ # ls -la /etc/ceph/
>>> total 28
>>> drwx------ 2 root root 4096 nov 3 11:01 .
>>> drwxr-xr-x 116 root root 12288 oct 30 14:37 ..
>>> -rw-r--r-- 1 nova nova 67 oct 28 11:43 ceph.client.cinder.keyring
>>> -rw-r--r-- 1 root root 588 nov 3 10:59 ceph.conf
>>> -rw-r--r-- 1 root root 92 oct 26 16:59 rbdmap
>>>
>>> cloud28:~ # rbd -p volumes ls
>>> 2015-11-03 11:01:58.782795 7fc6c714b840 -1 monclient(hunting): ERROR:
>>> missing keyring, cannot use cephx for authentication
>>> 2015-11-03 11:01:58.782800 7fc6c714b840 0 librados: client.admin
>>> initialization error (2) No such file or directory
>>> rbd: couldn't connect to the cluster!
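>>>
>>> (Since only the cinder keyring is present on this node, rbd has to be
>>> told not to default to client.admin, e.g.:
>>>
>>> rbd -p volumes ls --id cinder
>>>
>>> relying on the [client.cinder] keyring entry in ceph.conf, or passing
>>> --keyring /etc/ceph/ceph.client.cinder.keyring explicitly.)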
>>>
>>> Any help will be welcome.
>>>
>>
>>
>>
>> --
>>
>> ############################################################################
>> Iban Cabrillo Bartolome
>> Instituto de Fisica de Cantabria (IFCA)
>> Santander, Spain
>> Tel: +34942200969
>> PGP PUBLIC KEY:
>> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>>
>> ############################################################################
>> Bertrand Russell:
>> "El problema con el mundo es que los estúpidos están seguros de todo y los
>> inteligentes están llenos de dudas"
>
>
>
>
> --
> ############################################################################
> Iban Cabrillo Bartolome
> Instituto de Fisica de Cantabria (IFCA)
> Santander, Spain
> Tel: +34942200969
> PGP PUBLIC KEY:
> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
> ############################################################################
> Bertrand Russell:
> "El problema con el mundo es que los estúpidos están seguros de todo y los
> inteligentes están llenos de dudas"
>
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY: http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
"El problema con el mundo es que los estúpidos están seguros de todo y los inteligentes están llenos de dudas"
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com