On Sat, Aug 21, 2021 at 12:48 AM David Prude <david@xxxxxxxxxxxxxxxx> wrote:
>
> It appears my previous message may have been malformed. Attached is the
> mds log from the time period mentioned.
>
> -David
>
> On 8/20/21 8:59 PM, Patrick Donnelly wrote:
> > Hello David,
> >
> > On Fri, Aug 20, 2021 at 7:17 AM David Prude <david@xxxxxxxxxxxxxxxx> wrote:
> >> Hello,
> >>
> >> We have a cluster initially installed with pacific 16.2.5 in which
> >> we created a single cephfs volume as follows:
> >>
> >>     ceph fs volume create dncephfs
> >>
> >> We have a client.admin client:
> >>
> >>     [client.admin]
> >>         key = REDACTED
> >>         caps mds = "allow *"
> >>         caps mgr = "allow *"
> >>         caps mon = "allow *"
> >>         caps osd = "allow *"
> >>
> >> We have dncephfs mounted as this client:
> >>
> >>     mount | grep dncephfs
> >>     10.30.0.145:6789,10.30.0.146:6789,10.30.0.147:6789,10.30.0.148:6789,10.30.0.149:6789:/
> >>     on /mnt/dncephfs type ceph (rw,noatime,name=admin,secret=<hidden>,acl)
> >>
> >> We can create and delete snapshots at the root (/) of this cephfs:
> >>
> >>     root@ceph-01:~# cd /mnt/dncephfs/
> >>     root@ceph-01:/mnt/dncephfs# cd .snap
> >>     root@ceph-01:/mnt/dncephfs/.snap# mkdir testsnapshot1
> >>
> >> And confirm that the snapshot is seen by the mds:
> >>
> >>     root@ceph-01:/# ceph daemon mds.dncephfs.ceph-01.zmlkdd dump snaps
> >>     {
> >>         "last_created": 14,
> >>         "last_destroyed": 13,
> >>         "snaps": [
> >>             {
> >>                 "snapid": 14,
> >>                 "ino": 1,
> >>                 "stamp": "2021-08-20T11:05:17.129353+0000",
> >>                 "name": "testsnapshot1",
> >>                 "metadata": {}
> >>             }
> >>         ]
> >>     }
> >>
> >> However, we are unable to create snapshots within any sub-directory of /:
> >>
> >>     root@ceph-01:/mnt/dncephfs# mkdir exampledir
> >>     root@ceph-01:/mnt/dncephfs# cd exampledir/.snap
> >>     root@ceph-01:/mnt/dncephfs/exampledir/.snap# mkdir examplesnapshot
> >>     mkdir: cannot create directory ‘examplesnapshot’: Operation not permitted
> >
> > Have you tried remounting? Updating auth credentials usually requires a remount.
> >
> >> We initially were mounting this volume under a different client which
> >> did not have rwps. We tried explicitly granting rwps to that client and
> >> then moved on to testing with our client.admin (with the auth listed above).
> >
> > I cannot reproduce the problem...
> >
> >> We have tried explicitly setting "ceph fs set dncephfs allow_new_snaps
> >> true", which had no effect. We have searched the mds logs and no entries
> >> appear for the snapshot creation failure.
> >>
> >> Does anyone have any idea what may be going on, or what further
> >> information we should be looking at to resolve this?
> >
> > The next thing to do is to collect mds logs:
> >
> >     ceph config set mds debug_mds 20
> >     ceph config set mds debug_ms 1
> >
> > And share a snippet from the time of the failure.

Did you set the new "subvolume" flag on your root directory? The probable
source of the EPERM is here:

https://github.com/ceph/ceph/blob/d4352939e387af20531f6bfbab2176dd91916067/src/mds/Server.cc#L10301

You can unset it using:

    setfattr -n ceph.dir.subvolume -v 0 /path/to/cephfs/root

--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
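
Applied to the mount point from this thread, the subvolume-flag check and
reset Patrick describes would look roughly as follows. Note that reading the
ceph.dir.subvolume vxattr back with getfattr is not supported by every client
version; if the getfattr fails with "No such attribute", running the setfattr
and then retrying the snapshot is the relevant step:

    # Check whether the filesystem root carries the subvolume flag;
    # snapshot creation below a subvolume root is refused with EPERM,
    # which matches the behaviour described above.
    getfattr -n ceph.dir.subvolume /mnt/dncephfs

    # Clear the flag on the root ...
    setfattr -n ceph.dir.subvolume -v 0 /mnt/dncephfs

    # ... and retry the snapshot that previously failed.
    mkdir /mnt/dncephfs/exampledir/.snap/examplesnapshot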
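
For the log-collection step, a minimal capture sequence along the lines
suggested above might be the following; the "ceph config rm" calls at the end
simply drop the overrides again, assuming the debug options had not already
been customized elsewhere:

    # Raise MDS log verbosity cluster-wide via the mon config store.
    ceph config set mds debug_mds 20
    ceph config set mds debug_ms 1

    # Reproduce the failure so it shows up in the active MDS log.
    mkdir /mnt/dncephfs/exampledir/.snap/examplesnapshot

    # Remove the overrides once the log snippet has been captured.
    ceph config rm mds debug_mds
    ceph config rm mds debug_ms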
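
For completeness, granting the snapshot ('s') cap that the earlier non-admin
client lacked would look roughly like the sketch below. The client name,
secret file path, and single-monitor address are hypothetical stand-ins; as
noted above, changed caps are only picked up by a fresh mount:

    # Authorize a (hypothetical) client with rw plus the 'p' (layout/quota)
    # and 's' (snapshot) caps on the root of dncephfs.
    ceph fs authorize dncephfs client.dnuser / rwps

    # Remount so the client session picks up the new caps.
    umount /mnt/dncephfs
    mount -t ceph 10.30.0.145:6789:/ /mnt/dncephfs \
        -o name=dnuser,secretfile=/etc/ceph/dnuser.secret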