Ceph Cinder Capabilities reports wrong free size

Hi Jens,

There's a known bug in Cinder that, among other things, makes it report
the wrong free size. If you search a little you will find it; I think
it's still not fixed.
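
As a quick cross-check, you can ask the cluster directly what it thinks
is free. Below is a minimal sketch using the python-rados bindings,
assuming the cinder user and /etc/ceph/ceph.conf from your config; if I
remember correctly, the RBD driver derives its free_capacity_gb from
these cluster-wide stats:

import rados

# Connect as the same Ceph user cinder uses (rbd_user / rbd_ceph_conf
# in the [rbd-volumes] section below).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
cluster.connect()
try:
    stats = cluster.get_cluster_stats()   # values are in KiB
    print('total GB: %.1f' % (stats['kb'] / 1024.0 ** 2))
    print('avail GB: %.1f' % (stats['kb_avail'] / 1024.0 ** 2))
finally:
    cluster.shutdown()

If that prints a sane number while the scheduler still sees 8.0 GB of
available space, the problem is on the Cinder side.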

On 21/08/14 at #4, Jens-Christian Fischer wrote:
> I am working with Cinder Multi Backends on an Icehouse installation and have added another backend (Quobyte) to a previously running Cinder/Ceph installation.
>
> I can now create Quobyte volumes, but no longer any Ceph volumes. The cinder-scheduler log shows an incorrect number for the free size of the volumes pool and disregards the RBD backend as a viable storage system:
>
> 2014-08-21 16:42:49.847 1469 DEBUG cinder.openstack.common.scheduler.filters.capabilities_filter [r...] extra_spec requirement 'rbd' does not match 'quobyte' _satisfies_extra_specs /usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py:55
> 2014-08-21 16:42:49.848 1469 DEBUG cinder.openstack.common.scheduler.filters.capabilities_filter [r...] host 'controller@quobyte': free_capacity_gb: 156395.931061 fails resource_type extra_specs requirements host_passes /usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py:68
> 2014-08-21 16:42:49.848 1469 WARNING cinder.scheduler.filters.capacity_filter [r...-] Insufficient free space for volume creation (requested / avail): 20/8.0
> 2014-08-21 16:42:49.849 1469 ERROR cinder.scheduler.flows.create_volume [r.] Failed to schedule_create_volume: No valid host was found.
>
> here's our /etc/cinder/cinder.conf
>
> --- cut ---
> [DEFAULT]
> rootwrap_config = /etc/cinder/rootwrap.conf
> api_paste_config = /etc/cinder/api-paste.ini
> # iscsi_helper = tgtadm
> volume_name_template = volume-%s
> # volume_group = cinder-volumes
> verbose = True
> auth_strategy = keystone
> state_path = /var/lib/cinder
> lock_path = /var/lock/cinder
> volumes_dir = /var/lib/cinder/volumes
> rabbit_host=10.2.0.10
> use_syslog=False
> api_paste_config=/etc/cinder/api-paste.ini
> glance_num_retries=0
> debug=True
> storage_availability_zone=nova
> glance_api_ssl_compression=False
> glance_api_insecure=False
> rabbit_userid=openstack
> rabbit_use_ssl=False
> log_dir=/var/log/cinder
> osapi_volume_listen=0.0.0.0
> glance_api_servers=1.2.3.4:9292
> rabbit_virtual_host=/
> scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
> default_availability_zone=nova
> rabbit_hosts=10.2.0.10:5672
> control_exchange=openstack
> rabbit_ha_queues=False
> glance_api_version=2
> amqp_durable_queues=False
> rabbit_password=secret
> rabbit_port=5672
> rpc_backend=cinder.openstack.common.rpc.impl_kombu
> enabled_backends=quobyte,rbd
> default_volume_type=rbd
>
> [database]
> idle_timeout=3600
> connection=mysql://cinder:secret@10.2.0.10/cinder
>
> [quobyte]
> quobyte_volume_url=quobyte://hostname.cloud.example.com/openstack-volumes
> volume_driver=cinder.volume.drivers.quobyte.QuobyteDriver
>
> [rbd-volumes]
> volume_backend_name=rbd-volumes
> rbd_pool=volumes
> rbd_flatten_volume_from_snapshot=False
> rbd_user=cinder
> rbd_ceph_conf=/etc/ceph/ceph.conf
> rbd_secret_uuid=1234-5678-ABCD-…-DEF
> rbd_max_clone_depth=5
> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>
> --- cut ---
>
> any ideas?
>
> cheers
> Jens-Christian
>
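
One more thing I notice in the config above, though I can't say it's
the whole story: enabled_backends lists "quobyte,rbd", but your RBD
options live in a section named [rbd-volumes]. As far as I know, each
name in enabled_backends has to match a config group of the same name;
a backend whose group is missing falls back to DEFAULT values, which
could explain the odd 8.0 GB figure. A sketch of what I mean, keeping
your section name (treat it as a guess, not a verified fix):

[DEFAULT]
enabled_backends=quobyte,rbd-volumes

[rbd-volumes]
volume_backend_name=rbd-volumes
volume_driver=cinder.volume.drivers.rbd.RBDDriver
...

and then point a volume type at that backend so the capabilities
filter can match it:

cinder type-create rbd
cinder type-key rbd set volume_backend_name=rbd-volumes

The "extra_spec requirement 'rbd' does not match 'quobyte'" line in
your scheduler log looks like exactly that kind of volume_backend_name
comparison.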
