Re: [cinder] Cinder & Ceph Integration Error: No Valid Backend

All;

The overall issue has been resolved.

There were two major causes (a rough fix for both is sketched below):

1. The keyring(s) were misplaced (they were not within /etc/ceph/).
2. The 'openstack-cinder-volume' service was not started/enabled.
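
Assuming the default 'ceph' cluster name, a 'client.cinder' Ceph user, and the RDO packaging (service user 'cinder'); adjust names to your environment:

# pull the cinder keyring into /etc/ceph/, where librados looks for it by default
ceph auth get client.cinder -o /etc/ceph/ceph.client.cinder.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

# make sure the volume service is running and enabled at boot
systemctl enable --now openstack-cinder-volume.service
systemctl status openstack-cinder-volume.service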

Thank you,

Stephen Self 
IT Manager 

sself@xxxxxxxxxxxxxx
463 South Hamilton Court 
Gilbert, Arizona 85233 
Phone: (480) 610-3500 
Fax: (480) 610-3501 

www.performair.com



-----Original Message-----
From: SSelf@xxxxxxxxxxxxxx [mailto:SSelf@xxxxxxxxxxxxxx] 
Sent: Thursday, January 7, 2021 2:21 PM
To: ceph-users@xxxxxxx; openstack-discuss@xxxxxxxxxxxxxxxxxxx
Subject:  [cinder] Cinder & Ceph Integration Error: No Valid Backend

All;

We're having problems with our OpenStack/Ceph integration. The versions we're using are Ussuri (OpenStack) and Nautilus (Ceph).

When we try to create a volume, the volume is created, but its status is stuck at 'ERROR'.
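
What we're doing is roughly the following (the volume name and size here are only an example):

openstack volume create --size 1 test-volume
openstack volume show test-volume -c status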

This appears to be the most relevant line from the Cinder scheduler.log:

2021-01-07 14:00:38.473 140686 ERROR cinder.scheduler.flows.create_volume [req-f86556b5-cb2e-4b2d-b556-ed07e632289d 824c26c133b34d8b8e84a7acabbe6f91 a983323b5ffc47e18660794cd9344869 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available
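
As I understand it, the scheduler reports this when no cinder-volume backend has published usable capabilities, so there is nothing to weigh. The backend/service state can be checked with the standard clients, e.g.:

openstack volume service list
cinder get-pools

A cinder-volume entry for the 'ceph' backend that is 'down', or missing entirely, points at the volume service rather than the scheduler.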

Here is the 'cinder.conf' from our Controller Node:

[DEFAULT]
# define own IP address
my_ip = 10.0.80.40
log_dir = /var/log/cinder
state_path = /var/lib/cinder
auth_strategy = keystone
enabled_backends = ceph
glance_api_version = 2
debug = true

# RabbitMQ connection info
transport_url = rabbit://openstack:<password>@10.0.80.40:5672
enable_v3_api = True

# MariaDB connection info
[database]
connection = mysql+pymysql://cinder:<password>@10.0.80.40/cinder

# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = http://10.0.80.40:5000
auth_url = http://10.0.80.40:5000
memcached_servers = 10.0.80.40:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = <password>

[oslo_concurrency]
lock_path = $state_path/tmp

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = rbd_os_volumes
rbd_ceph_conf = /etc/ceph/463/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_exclusive_cinder_pool = true

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/300/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = rbd_os_backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
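
One note on the [ceph] section: as far as I know, librados finds the keyring either through the ceph.conf referenced by rbd_ceph_conf or through its default /etc/ceph/ search path. A minimal sketch of the client section, assuming the keyring file is named ceph.client.cinder.keyring:

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring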

Does anyone have any ideas as to what is going wrong?

Thank you,

Stephen Self 
IT Manager 
Perform Air International
sself@xxxxxxxxxxxxxx
www.performair.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx