Hi,
One more step in debugging this issue (the hypervisor/nova-compute node runs Xen 4.4.2):
I think the problem is that libvirt is not passing the correct user or credentials to access the pool. In the instance's qemu log I see:
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
xen be: qdisk-51760: initialise() failed
xen be: qdisk-51760: error: Could not open 'volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e': No such file or directory
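The image name in the error ('volumes/volume-...') carries no cephx id or conf, so it looks like the qdisk backend opens the RBD image with the default client.admin identity. One way to check what libvirt actually generated for that disk (a sketch; replace instance-0000xxxx with the libvirt domain name from virsh list):

virsh domblklist instance-0000xxxx
virsh dumpxml instance-0000xxxx | grep -B2 -A8 "protocol='rbd'"

If Nova passed the credentials, the disk definition should contain something like the following (UUID and volume name taken from this thread, monitor address from the ceph.conf quoted below):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='cinder'>
    <secret type='ceph' uuid='67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX'/>
  </auth>
  <source protocol='rbd' name='volumes/volume-4d26bb31-91e8-4646-8010-82127b775c8e'>
    <host name='10.10.3.1' port='6789'/>
  </source>
  <target dev='xvdd' bus='xen'/>
</disk>

If the <auth> element is there but the error persists, the Xen qdisk backend may simply not be using it.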
But listing the volumes pool as the cinder user works fine:
rbd ls -p volumes --id cinder
test
volume-4d26bb31-91e8-4646-8010-82127b775c8e
volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
volume-7da08f12-fb0f-4269-931a-d528c1507fee
Running:
qemu-img info -f rbd rbd:volumes/test
does not work, but specifying the cinder user and the ceph.conf file directly works fine:
qemu-img info -f rbd rbd:volumes/test:id=cinder:conf=/etc/ceph/ceph.conf
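That is consistent with the qdisk error above: without an explicit id, librbd defaults to client.admin, and presumably there is no admin keyring on the compute node. As another quick check (CEPH_ARGS is read by librados/librbd, so it should also be honored by qemu-img, and it is a way to feed the client name to tools that do not take it on the command line):

export CEPH_ARGS="--id cinder"
qemu-img info -f rbd rbd:volumes/test

If that also works, the image and the cinder key are fine and only the identity handed to qemu is wrong.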
I think nova.conf is set correctly (the [libvirt] section):
images_rbd_pool = volumes
images_rbd_ceph_conf = /etc/ceph/ceph.conf
hw_disk_discard=unmap
rbd_user = cinder
rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
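As a sanity check with exactly the values nova.conf points at (a hedged suggestion; running it as the user nova-compute runs as also rules out keyring permission problems, assuming the compute node's ceph.conf has the [client.cinder] keyring entry shown in the quoted message below):

sudo -u nova ceph -s --id cinder -c /etc/ceph/ceph.conf
sudo -u nova rbd ls -p volumes --id cinder -c /etc/ceph/ceph.conf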
And looking at libvirt:
# virsh secret-list
setlocale: No such file or directory
UUID Usage
--------------------------------------------------------------------------------
67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX ceph client.cinder secret
virsh secret-get-value 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX
setlocale: No such file or directory
AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
cat /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQAonAdWS3iMJxxxxxxj9iErv001a0k+vyFdUg==
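So the libvirt secret value and the keyring file match. As one more hedged check, the same key can be compared against what the monitors have stored for client.cinder (run the first command on a node that has the admin keyring, e.g. a monitor):

ceph auth get-key client.cinder; echo
virsh secret-get-value 67a6d4a1-e53a-42c7-9bc9-XXXXXXXXXXXX

Both should print the same base64 string.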
Any ideas will be welcome.
regards, I
2015-11-04 10:51 GMT+01:00 Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>:
Dear Cephers,

I still can't attach volumes to my cloud machines. The ceph version is 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) and OpenStack is Juno. Nova and cinder are able to create volumes on Ceph:

cephvolume:~ # rados ls --pool volumes
rbd_header.1f7784a9e1c2e
rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
rbd_directory
rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
rbd_header.23d5e33b4c15c
rbd_id.volume-4d26bb31-91e8-4646-8010-82127b775c8e
rbd_header.20407190ce77f

cloud:~ # cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 4d26bb31-91e8-4646-8010-82127b775c8e | in-use | None         | 2    | rbd         | false    | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

nova:~ # nova volume-attach 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb 4d26bb31-91e8-4646-8010-82127b775c8e auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/xvdd                            |
| id       | 4d26bb31-91e8-4646-8010-82127b775c8e |
| serverId | 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb |
| volumeId | 4d26bb31-91e8-4646-8010-82127b775c8e |
+----------+--------------------------------------+

From the nova-compute node (Ubuntu 14.04 LTS) I see the attaching/detaching:

cloud01:~ # dpkg -l | grep ceph
ii  ceph-common    0.94.5-1trusty  amd64  common utilities to mount and interact with a ceph storage cluster
ii  libcephfs1     0.94.5-1trusty  amd64  Ceph distributed file system client library
ii  python-cephfs  0.94.5-1trusty  amd64  Python libraries for the Ceph libcephfs library
ii  librbd1        0.94.5-1trusty  amd64  RADOS block device client library
ii  python-rbd     0.94.5-1trusty  amd64  Python libraries for the Ceph librbd library

In cinder.conf:
rbd_user = cinder
rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
[rbd-cephvolume]
volume_backend_name = rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

In nova.conf:
rbd_user=cinder
# The libvirt UUID of the secret for the rbd_user volumes (string value)
rbd_secret_uuid=67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxxx
images_rbd_pool=volumes
# Path to the ceph configuration file to use (string value)
images_rbd_ceph_conf=/etc/ceph/ceph.conf

ls -la /etc/libvirt/secrets
total 16
drwx------ 2 root root 4096 Nov  4 10:28 .
drwxr-xr-x 7 root root 4096 Oct 22 13:15 ..
-rw------- 1 root root   40 Nov  4 10:28 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxx.base64
-rw------- 1 root root  170 Nov  4 10:25 67a6d4a1-e53a-42c7-9bc9-xxxxxxxxxx.xml

2015-11-04 10:39:42.573 11653 INFO nova.compute.manager [req-8b2a9793-4b39-4cb0-b291-e492c350387e b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance: 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Detach volume 4d26bb31-91e8-4646-8010-82127b775c8e from mountpoint /dev/xvdd
2015-11-04 10:40:43.266 11653 INFO nova.compute.manager [req-35218de0-3f26-496b-aad9-5c839143da17 b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -] [instance: 59aa021e-bb4c-4154-9b18-9d09f5fd3aeb] Attaching volume 4d26bb31-91e8-4646-8010-82127b775c8e to /dev/xvdd

But on the cloud machine (SL6) the volume (xvdd) never shows up:

[root@cloud5 ~]# cat /proc/partitions
major minor  #blocks  name
202        0   20971520 xvda
202       16  209715200 xvdb
202       32   10485760 xvdc

Thanks in advance, I
--

2015-11-03 11:18 GMT+01:00 Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx>:

Hi all,
During the last week I have been trying to connect our pre-existing ceph cluster to our OpenStack instance. The ceph-cinder integration was easy (or at least I think so!!). There is only one pool (volumes) used to attach block storage to our cloud machines, and the client.cinder user has permission on this pool (following the guides):

client.cinder
    key: AQAonXXXXXXXRAAPIAj9iErv001a0k+vyFdUg==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes

The ceph.conf file seems to be OK:

[global]
fsid = 6f5a65a7-316c-4825-afcb-428608941dd1
mon_initial_members = cephadm, cephmon02, cephmon03
mon_host = 10.10.3.1,10.10.3.2,10.10.3.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.10.0.0/16
cluster_network = 192.168.254.0/27

[osd]
osd_journal_size = 20000

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

The trouble seems to be that the blocks are created using client.admin instead of client.cinder.

From the cinder machine:

cinder:~ # rados ls --pool volumes
rbd_id.volume-5e2ab5c2-4710-4c28-9755-b5bc4ff6a52a
rbd_directory
rbd_id.volume-7da08f12-fb0f-4269-931a-d528c1507fee
rbd_header.23d5e33b4c15c
rbd_header.20407190ce77f

But if I try to list using the cinder client:

cinder:~ # rados ls --pool volumes --secret client.cinder
"empty answer"

cinder:~ # ls -la /etc/ceph
total 24
drwxr-xr-x   2 root   root    4096 nov  3 10:17 .
drwxr-xr-x 108 root   root    4096 oct 29 09:52 ..
-rw-------   1 root   root      63 nov  3 10:17 ceph.client.admin.keyring
-rw-r--r--   1 cinder cinder    67 oct 28 13:44 ceph.client.cinder.keyring
-rw-r--r--   1 root   root     454 oct  1 13:56 ceph.conf
-rw-r--r--   1 root   root      73 sep 27 09:36 ceph.mon.keyring

From a client (I have assumed that this machine only needs the cinder key...):

cloud28:~ # ls -la /etc/ceph/
total 28
drwx------   2 root root  4096 nov  3 11:01 .
drwxr-xr-x 116 root root 12288 oct 30 14:37 ..
-rw-r--r--   1 nova nova    67 oct 28 11:43 ceph.client.cinder.keyring
-rw-r--r--   1 root root   588 nov  3 10:59 ceph.conf
-rw-r--r--   1 root root    92 oct 26 16:59 rbdmap

cloud28:~ # rbd -p volumes ls
2015-11-03 11:01:58.782795 7fc6c714b840 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2015-11-03 11:01:58.782800 7fc6c714b840  0 librados: client.admin initialization error (2) No such file or directory
rbd: couldn't connect to the cluster!

Any help will be welcome.
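Side note on the quoted tests above (a hedged suggestion): as far as I know, rados and rbd select the cephx identity with --id or --name (optionally plus --keyring), not with --secret, and without an explicit id they fall back to client.admin, which is why the bare rbd command on cloud28 complained about a missing keyring. The usual way to list the pool as client.cinder would be:

rados ls --pool volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
rbd -p volumes --id cinder ls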
############################################################################
Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY: http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
############################################################################
Bertrand Russell:
"El problema con el mundo es que los estúpidos están seguros de todo y los inteligentes están llenos de dudas"
############################################################################