Re: Global power failure, OpenStack Nova/libvirt/KVM, and Ceph RBD locks

Florian,

Thanks for posting about this issue. We have been seeing the same
thing (stale exclusive locks) on our OpenStack and Ceph cloud more
frequently, as our datacentre has had reliability issues with power
and cooling recently that caused several unexpected shutdowns.

We are currently on Ceph Mimic 13.2.6. Having read through this
thread and the related links, I wanted to confirm whether the caps
listed below for our cinder client are correct. We have upgraded
through many major Ceph versions over the years, and I'm sure a lot
of our configs and settings still contain deprecated options.

client.cinder
key: sanitized==
caps: [mgr] allow r
caps: [mon] profile rbd
caps: [osd] allow class-read object_prefix rbd_children, profile rbd
pool=volumes, profile rbd pool=vms, profile rbd pool=images

From what I read, the explicit blacklist permission was something
that needed to be applied before the Luminous upgrade, but once you
are on Luminous or later it is no longer required, provided you have
switched to using the rbd profile.
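
If the caps above do still need adjusting, my understanding is that
the change would look something like the command below. This is only
a sketch: the pool names and the mgr 'allow r' cap simply mirror our
keyring above, so substitute your own pools and check the Luminous
release notes before running it. Note that 'ceph auth caps' replaces
the entity's caps wholesale, so every daemon's caps have to be listed
again.

ceph auth caps client.cinder \
    mgr 'allow r' \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'

# confirm the new caps took effect
ceph auth get client.cinder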

On Fri, Nov 15, 2019 at 11:05 AM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>
> To clear up a few misconceptions here:
>
> * RBD keyrings should use the "profile rbd" permissions, everything
> else is *wrong* and should be fixed asap
> * Manually adding the blacklist permission might work but isn't
> future-proof, fix the keyring instead
> * The suggestion to mount them elsewhere to fix this only works
> because "elsewhere" probably has an admin keyring, this is a bad
> work-around, fix the keyring instead
> * This is unrelated to openstack and will happen with *any* reasonably
> configured hypervisor that uses exclusive locking
>
> This problem usually happens after upgrading to Luminous without
> reading the change log. The change log tells you to adjust the
> keyring permissions accordingly.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, Nov 15, 2019 at 4:56 PM Joshua M. Boniface <joshua@xxxxxxxxxxx> wrote:
> >
> > Thanks Simon! I've implemented it, I guess I'll test it out next time my homelab's power dies :-)
> >
> > On 2019-11-15 10:54 a.m., Simon Ironside wrote:
> >
> > On 15/11/2019 15:44, Joshua M. Boniface wrote:
> >
> > Hey All:
> >
> > I've also quite frequently experienced this sort of issue with my Ceph RBD-backed QEMU/KVM
> > cluster (not OpenStack specifically). Should this workaround of allowing the 'osd blacklist'
> > command in the caps help in that scenario as well, or is this an OpenStack-specific
> > functionality?
> >
> > Yes, my use case is RBD backed QEMU/KVM too, not Openstack. It's
> > required for all RBD clients.
> >
> > Simon
> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



