Re: rbd ls operation not permitted

On Mon, Oct 8, 2018 at 11:33 AM <sinan@xxxxxxxx> wrote:
>
> Thanks, changing rxw to rwx solved the problem. But it is still
> strange: I am issuing the rbd command against the ssdvolumes pool, not
> ssdvolumes-13. And why does "allow *" on the mon solve the problem?
> I am a bit lost :-)
>
> --
> This does work
> --
> caps: [mon] allow *
> caps: [osd] allow *
> $ rbd ls -p ssdvolumes --id openstack
> volume-e61ec087-e654-471b-975f-f72b753a3bb0
> $
>
>
> --
> This does NOT work
> --
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> pool=ssdvolumes, allow rxw pool=ssdvolumes-13, allow rwx
> pool=sasvolumes-13, allow rwx pool=sasvolumes, allow rwx pool=vms, allow
> rwx pool=images
> $ rbd ls -p ssdvolumes --id openstack
> rbd: list: (1) Operation not permitted
> $
>
>
> --
> This does work
> --
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
> pool=ssdvolumes, allow rwx pool=ssdvolumes-13, allow rwx
> pool=sasvolumes-13, allow rwx pool=sasvolumes, allow rwx pool=vms, allow
> rwx pool=images
> $ rbd ls -p ssdvolumes --id openstack
> volume-e61ec087-e654-471b-975f-f72b753a3bb0
> $
>
>
> The strange thing is, with an older rbd client (like the one we use in
> OpenStack Ocata) we don't see this behavior.

I tried to re-create this using a Jewel v10.2.7 build (MON, OSD, and
client), but I could not reproduce it: I received the expected
"Operation not permitted" due to the corrupt OSD caps. Starting with
Jewel v10.2.11, the monitor will at least prevent you from setting
corrupt caps on a user in the first place.
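
As a sketch of what that looks like (assuming a v10.2.11+ monitor; the
exact error text may differ, and "<osd cap parse error>" below is just
a placeholder), re-applying the malformed cap string should now be
rejected up front instead of silently breaking the client:

$ ceph auth caps client.openstack mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rxw pool=ssdvolumes-13'
Error EINVAL: <osd cap parse error>

On your v10.2.10 monitor the same string was accepted as-is, and the
failure only surfaced later as the client-side EPERM you saw.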

>
> On 08-10-2018 17:04, Jason Dillaman wrote:
> > On Mon, Oct 8, 2018 at 10:20 AM <sinan@xxxxxxxx> wrote:
> >>
> >> On a Ceph Monitor:
> >> # ceph auth get client.openstack | grep caps
> >> exported keyring for client.openstack
> >>         caps mon = "allow r"
> >>         caps osd = "allow class-read object_prefix rbd_children, allow
> >> rwx
> >> pool=ssdvolumes, allow rxw pool=ssdvolumes-13, allow rwx
> >> pool=sasvolumes-13, allow rwx pool=sasvolumes, allow rwx pool=vms,
> >> allow
> >> rwx pool=images"
> >> #
> >
> > By chance, is your issue really that your OpenStack 13 cluster cannot
> > access the pool named "ssdvolumes-13"? I ask because you have a typo
> > in your "rwx" cap (you have "rxw" instead).
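> >
> > If so, correcting the cap in place should fix it. A sketch (note that
> > "ceph auth caps" replaces all existing caps for the user, so supply
> > the full intended cap strings, not just the one being fixed):
> >
> > $ ceph auth caps client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=ssdvolumes, allow rwx pool=ssdvolumes-13, allow rwx pool=sasvolumes-13, allow rwx pool=sasvolumes, allow rwx pool=vms, allow rwx pool=images'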
> >
> >>
> >> On the problematic Openstack cluster:
> >> $ ceph auth get client.openstack --id openstack | grep caps
> >> Error EACCES: access denied
> >> $
> >>
> >>
> >> When I change "caps: [mon] allow r" to "caps: [mon] allow *" the
> >> problem
> >> disappears.
> >>
> >>
> >> On 08-10-2018 16:06, Jason Dillaman wrote:
> >> > Can you run "ceph auth get client.openstack | grep caps"?
> >> >
> >> > On Mon, Oct 8, 2018 at 10:03 AM <sinan@xxxxxxxx> wrote:
> >> >>
> >> >> The result of your command:
> >> >>
> >> >> $ rbd ls --debug-rbd=20 -p ssdvolumes --id openstack
> >> >> 2018-10-08 13:42:17.386505 7f604933fd40 20 librbd: list 0x7fff5b25cc30
> >> >> rbd: list: (1) Operation not permitted
> >> >> $
> >> >>
> >> >> Thanks!
> >> >> Sinan
> >> >>
> >> >> On 08-10-2018 15:37, Jason Dillaman wrote:
> >> >> > On Mon, Oct 8, 2018 at 9:24 AM <sinan@xxxxxxxx> wrote:
> >> >> >>
> >> >> >> Hi,
> >> >> >>
> >> >> >> I am running a Ceph cluster (Jewel, ceph version 10.2.10-17.el7cp).
> >> >> >>
> >> >> >>
> >> >> >> I also have 2 OpenStack clusters (Ocata (v12) and Pike (v13)).
> >> >> >>
> >> >> >> When I run "rbd ls -p <pool> --id openstack" on the OpenStack
> >> >> >> Ocata cluster it works fine, but when I run the same command on
> >> >> >> the OpenStack Pike cluster I get an "operation not permitted".
> >> >> >>
> >> >> >>
> >> >> >> OpenStack Ocata (where it does work fine):
> >> >> >> $ rbd -v
> >> >> >> ceph version 10.2.7-48.el7cp
> >> >> >> (cf7751bcd460c757e596d3ee2991884e13c37b96)
> >> >> >> $ rpm -qa | grep rbd
> >> >> >> python-rbd-10.2.7-48.el7cp.x86_64
> >> >> >> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64
> >> >> >> librbd1-10.2.7-48.el7cp.x86_64
> >> >> >> rbd-mirror-10.2.7-48.el7cp.x86_64
> >> >> >> $
> >> >> >>
> >> >> >> OpenStack Pike (where it doesn't work, operation not permitted):
> >> >> >> $ rbd -v
> >> >> >> ceph version 12.2.4-10.el7cp
> >> >> >> (03fd19535b3701f3322c68b5f424335d6fc8dd66)
> >> >> >> luminous (stable)
> >> >> >> $ rpm -qa | grep rbd
> >> >> >> rbd-mirror-12.2.4-10.el7cp.x86_64
> >> >> >> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.5.x86_64
> >> >> >> librbd1-12.2.4-10.el7cp.x86_64
> >> >> >> python-rbd-12.2.4-10.el7cp.x86_64
> >> >> >> $
> >> >> >
> >> >> > Can you run "rbd --debug-rbd=20 ls -p <pool> --id openstack" and
> >> >> > pastebin the resulting logs?
> >> >> >
> >> >> >>
> >> >> >> Both clusters are using the same Ceph client key, same Ceph
> >> >> >> configuration file.
> >> >> >>
> >> >> >> The only difference is the version of rbd.
> >> >> >>
> >> >> >> Is this expected behavior?
> >> >> >>
> >> >> >>
> >> >> >> Thanks!
> >> >> >> Sinan



--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


