Any chance that you have the OSD log for the PGs mapped to that image's
header? Since that method is brand new, I wonder if the -EOPNOTSUPP
error is being somehow mapped to -EPERM under FreeBSD(?).

On Mon, Jul 3, 2017 at 3:24 PM, Willem Jan Withagen <wjw@xxxxxxxxxxx> wrote:
>
>
> On 3-7-2017 at 20:00, Gregory Farnum wrote:
>>
>> On Mon, Jul 3, 2017 at 10:51 AM, Willem Jan Withagen <wjw@xxxxxxxxxxx>
>> wrote:
>>>
>>> Hi,
>>>
>>> I do not seem to have luck with working with the rights structure in
>>> Ceph. But please help me out with my ignorance....
>>>
>>> I have set up the FreeBSD cluster running in 5 ceph jails/VMs. That
>>> works like a charm, and I can use ceph-fuse to mount a FS and
>>> write/read/... on it when I'm outside the ceph jails. For that I
>>> copied ceph.conf and ceph.client.admin.keyring to the current system
>>> I would like to connect with.
>>>
>>> Also things like ceph -s and rados commands just work like a charm.
>>>
>>> So the next check is to test rbd-ggate, the FreeBSD variant to map
>>> rbd images to a device....
>>> But then I start to get things like:
>>> ----
>>> # rbd info rbd/testrbdggate45846
>>> 2017-07-03 19:41:00.562567 80ee53b00 -1 librbd::image::OpenRequest:
>>> failed to retrieve create_timestamp: (1) Operation not permitted
>>> 2017-07-03 19:41:00.562685 80f8b2000 -1 librbd::ImageState: 0x80ef23640
>>> failed to open image: (1) Operation not permitted
>>> rbd: error opening image testrbdggate45846: (1) Operation not permitted
>>> ----
>>>
>>> Which is definitely a rights issue, because I can do that without much
>>> trouble on one of the Ceph servers:
>>> ----
>>> # jexec ceph_0 rbd info rbd/testrbdggate45846
>>> rbd image 'testrbdggate45846':
>>>         size 65536 kB in 16 objects
>>>         order 22 (4096 kB objects)
>>>         block_name_prefix: rbd_data.10e3c0b1daf
>>>         format: 2
>>>         features: layering, exclusive-lock, object-map, fast-diff,
>>> deep-flatten
>>>         flags:
>>> ----
>>>
>>> So why does this work from a server that is actually running
>>> mon/osd/mgr, but not from a server that has the same
>>> ceph.client.admin.keyring:
>>> [client.admin]
>>>         key = AQBUXllZMNZSARAAsSexLBkWJQS6vHi9u3rrSA==
>>>         auid = 0
>>>         caps mds = "allow *"
>>>         caps mgr = "allow *"
>>>         caps mon = "allow *"
>>>         caps osd = "allow *"
>>>
>>> and ceph auth list:
>>> installed auth entries:
>>>
>>> client.admin
>>>         key: AQBUXllZMNZSARAAsSexLBkWJQS6vHi9u3rrSA==
>>>         auid: 0
>>>         caps: [mds] allow *
>>>         caps: [mgr] allow *
>>>         caps: [mon] allow *
>>>         caps: [osd] allow *
>
>
>>> client.ggate
>>>         key: AQANP1pZGNDmNRAAVb8NRXUMmWYh9i1nju6rYA==
>>>         caps: [mon] allow r
>>>         caps: [osd] allow class-read object_prefix rbd_children, allow
>>> rwx pool=rbd
>>
>> Presumably ggate is using this client. I'm not sure off-hand what
>> permissions RBD needs, but I presume it needs class-write in order to
>> create new images (and they probably won't be prefixed with
>> rbd_children?).
>
>
> Well, client.ggate was an attempt to mirror what we have in our working
> OpenStack Ceph.
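If the ggate caps do turn out to be too narrow, one way to broaden them would be something along these lines. This is an untested sketch: the exact cap string librbd needs may differ, and the 'profile rbd' shorthand in the second command is only available on recent releases (Luminous and later).

```shell
# Hypothetical broadened caps for client.ggate: grant rwx (which covers
# class method execution) on the rbd pool, per the class-write guess above.
ceph auth caps client.ggate \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'

# On releases that support cap profiles, this is the simpler spelling:
ceph auth caps client.ggate mon 'profile rbd' osd 'profile rbd pool=rbd'
```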
>
> But my major concern is that it all works on a host that is actually
> part of the cluster (as admin), but it does not work for all rbd
> commands once I start working on a separate host.
> Ceph and rados are okay, but certain rbd commands fail.
>
> So why doesn't it work as admin on an 'external' host?
>
>
> --WjW
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
Jason
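The per-PG log check suggested at the top of this message can be approached roughly like this (a sketch; the image id comes from the block_name_prefix shown above, rbd_data.10e3c0b1daf, so the header object should be named rbd_header.10e3c0b1daf, and osd.N stands in for whatever primary the first command reports):

```shell
# Map the image's header object to its PG and acting OSD set.
ceph osd map rbd rbd_header.10e3c0b1daf

# Raise logging on the acting primary (osd.N from the output above),
# then re-run 'rbd info' on the external host and inspect that OSD's
# log for the operation that returned EPERM / EOPNOTSUPP.
ceph tell osd.N injectargs '--debug-osd 20 --debug-ms 1'
```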