Hi Eugen,
The cinder keyring used by the 2 pools is the same. The rbd command
works with this keyring and the ceph.conf used by OpenStack, while the
rados ls command stays stuck.
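For reference, my tests on the hypervisor look roughly like this (the
pool name "cinder-rd" and the keyring path are only placeholders for
the values from my cinder backend configuration):

    # works with the cinder keyring (placeholder pool/keyring names)
    rbd -p cinder-rd --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring ls
    # hangs with the same keyring and ceph.conf
    rados -p cinder-rd --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring ls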
I tried with the previously used ceph-common version (10.2.5) and with
the latest ceph version (14.2.1). With the Nautilus ceph-common
version, the 2 cinder-volume services crashed...
Adrien
On 02/07/2019 at 13:50, Eugen Block wrote:
Hi,
did you try to use rbd and rados commands with the cinder keyring, not
the admin keyring? Did you check if the caps for that client are still
valid (do the caps differ between the two cinder pools)?
Are the ceph versions on your hypervisors also nautilus?
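For example, something like this shows the caps and the versions on
both sides (the client name "client.cinder" is just an assumption,
adjust it to whatever your cinder backends actually use):

    # caps of the cinder client (assumed name)
    ceph auth get client.cinder
    # versions of all cluster daemons
    ceph versions
    # ceph-common version of the local client, run this on the hypervisor
    ceph --version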
Regards,
Eugen
Quoting Adrien Georget <adrien.georget@xxxxxxxxxxx>:
Hi all,
I'm facing a very strange issue after migrating my Luminous cluster
to Nautilus.
I have 2 pools configured for OpenStack Cinder volumes in a
multi-backend setup: one "service" Ceph pool with cache tiering and one
"R&D" Ceph pool.
After the upgrade, the R&D pool became inaccessible for Cinder and
the cinder-volume service using this pool can't start anymore.
What is strange is that OpenStack and Ceph report no errors: the Ceph
cluster is healthy, all OSDs are up and running, and the "service" pool
is still working fine with the other cinder-volume service on the same
OpenStack host.
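These are the kinds of checks I mean, and they all come back clean
(output omitted here):

    ceph -s
    ceph health detail
    ceph osd stat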
I followed the upgrade procedure exactly
(https://ceph.com/releases/v14-2-0-nautilus-released/#upgrading-from-mimic-or-luminous)
and had no problems during the upgrade, but I can't understand why
Cinder still fails with this pool.
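For what it's worth, one of the last steps of that procedure can be
double-checked with something like:

    # should show "require_osd_release nautilus" once the upgrade is complete
    ceph osd dump | grep require_osd_release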
I can access, list and create volumes on this pool with the rbd or
rados commands from the monitors, but on the OpenStack hypervisor the
rbd and rados ls commands stay stuck, and rados ls gives this message
(134.158.208.37 is an OSD node, 10.158.246.214 an OpenStack
hypervisor):
2019-07-02 11:26:15.999869 7f63484b4700 0 --
10.158.246.214:0/1404677569 >> 134.158.208.37:6884/2457222
pipe(0x555c2bf96240 sd=7 :0 s=1 pgs=0 cs=0 l=1 c=0x555c2bf97500).fault
ceph version 14.2.1
OpenStack Newton
I spent 2 days checking everything on the Ceph side but couldn't find
anything problematic...
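If it helps, I can re-run the stuck command with client-side debug
logging raised directly on the command line, e.g. (the pool name is
again just a placeholder):

    rados -p cinder-rd --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring \
        --debug-ms 1 --debug-rados 20 ls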
If you have any hints that could help me, I would appreciate it :)
Adrien
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com