Re: Ceph and OpenStack

Fortunately, Ceph Giant + OpenStack Juno works flawlessly for me.

If you have configured Cinder / Glance correctly, then after restarting the Cinder and Glance services you should see something like this in the Cinder and Glance logs.
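
For reference, the relevant configuration looks roughly like this. This is a minimal sketch based on the upstream rbd-openstack guide; the pool names, the cinder / glance cephx users, and the secret UUID placeholder are assumptions to adjust for your own deployment.

/etc/cinder/cinder.conf:

[DEFAULT]
# RBD driver whose startup is logged below
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# UUID of the libvirt secret that holds the client.cinder key
rbd_secret_uuid = <libvirt-secret-uuid>

/etc/glance/glance-api.conf:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8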


Cinder logs:

volume.log:2015-04-02 13:20:43.943 2085 INFO cinder.volume.manager [req-526cb14e-42ef-4c49-b033-e9bf2096be8f - - - - -] Starting volume driver RBDDriver (1.1.0)


Glance logs:

api.log:2015-04-02 13:20:50.448 1266 DEBUG glance.common.config [-] glance_store.default_store     = rbd log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] glance_store.rbd_store_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] glance_store.rbd_store_chunk_size = 8 log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] glance_store.rbd_store_pool    = images log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
api.log:2015-04-02 13:20:50.449 1266 DEBUG glance.common.config [-] glance_store.rbd_store_user    = glance log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004
api.log:2015-04-02 13:20:50.451 1266 DEBUG glance.common.config [-] glance_store.stores            = ['rbd'] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2004


If Cinder and Glance are able to initialize the RBD driver, then everything should work like a charm.
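
A quick end-to-end check (the volume name here is hypothetical, and it assumes the volumes pool and client.cinder user from the sketch above):

# create a 1 GB test volume through Cinder ...
cinder create --display-name test-vol 1
# ... then confirm the backing image appeared in the Ceph pool
rbd -p volumes --id cinder ls

If the rbd listing shows a volume-<uuid> image, the whole path is working.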


****************************************************************
Karan Singh 
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
http://www.csc.fi/
****************************************************************

On 02 Apr 2015, at 03:10, Erik McCormick <emccormick@xxxxxxxxxxxxxxx> wrote:

Can you both set Cinder and/or Glance logging to debug and provide some logs? There was an issue with the first Juno release of Glance in some vendor packages, so make sure you're fully updated to 2014.2.2.
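
For reference, debug logging is just a flag in each service's config; a minimal sketch (set it in /etc/cinder/cinder.conf and /etc/glance/glance-api.conf, then restart the services):

[DEFAULT]
# log at DEBUG level so the RBD driver startup lines become visible
debug = True
verbose = True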

On Apr 1, 2015 7:12 PM, "Quentin Hartman" <qhartman@xxxxxxxxxxxxxxxxxxx> wrote:
I am coincidentally going through the same process right now. The best reference I've found is this: http://ceph.com/docs/master/rbd/rbd-openstack/

When I did Firefly / Icehouse, this (seemingly) same guide Just Worked(tm), but now with Giant / Juno I'm running into trouble similar to what you describe. Everything _seems_ right, but creating volumes via OpenStack just sits and spins forever, never creating anything and (as far as I've found so far) not logging anything interesting. Normal RADOS operations work fine.
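
One thing that may be worth ruling out for a hang like this (an assumption on my part, using the pool and user names from the upstream guide) is a client.cinder key that authenticates but lacks caps on the volumes pool; in my understanding that tends to stall rather than error out:

# show the caps granted to the cinder user
ceph auth get client.cinder
# try an RBD write with that user's key directly (128 MB test image)
rbd --id cinder -p volumes create hangtest --size 128
rbd --id cinder -p volumes rm hangtest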

Feel free to hit me up off list if you want to confer and then we can return here if we come up with anything to be shared with the group.

QH

On Wed, Apr 1, 2015 at 3:43 PM, Iain Geddes <iain.geddes@xxxxxxxxxxx> wrote:
All,

Apologies for my ignorance, but I don't seem to be able to search an archive.

I've spent a lot of time trying, but am having difficulty integrating Ceph (Giant) into OpenStack (Juno). I don't appear to be recording any errors anywhere, but nothing seems to be written to the cluster when I try creating a new volume or importing an image. The cluster is healthy and I can create a static rbd mapping, so I know the key components are in place. My problem is almost certainly finger trouble on my part, but I'm completely lost and wondered if there is a well-thumbed guide to integration?
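
One sanity check that may narrow it down (assuming the client.glance / client.cinder users and pools from the upstream rbd-openstack guide, which may not match your setup):

# confirm each service user can actually reach its pool
rbd --id glance -p images ls
rbd --id cinder -p volumes ls

If either command hangs or errors, the problem is in the cephx caps or keyring paths rather than in the OpenStack config.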

Thanks


Iain





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
