This problem of file systems being inaccessible after an upgrade to any
client other than client.admin dates back to v14 and carries on through
v17. It also applies to any case of specifying other than the default
pool names for new file systems. Solved because Curt remembered a link
on this list. (Thanks Curt!) Here's what the official ceph docs ought
to have provided, for others who hit this. YMMV:
IF
you have ceph file systems whose data and metadata pools were
specified in the 'ceph fs new' command (meaning not left to the
defaults, which create them for you),
OR
you have an existing ceph file system and are upgrading to a new
major version of ceph
THEN
for the documented 'ceph fs authorize...' commands to do as
documented (and to avoid strange 'operation not permitted' errors when
doing file I/O, or similar security-related problems, for all but such
as the client.admin user), you must first run:
ceph osd pool application set <your metadata pool name> cephfs metadata <your ceph fs filesystem name>
and
ceph osd pool application set <your data pool name> cephfs data <your ceph fs filesystem name>
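For example, with the 'libraryfs' filesystem and its 'library' /
'library_metadata' pools from my question quoted below, that works
out to:
ceph osd pool application set library_metadata cephfs metadata libraryfs
ceph osd pool application set library cephfs data libraryfs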
Otherwise, when the OSDs get a request to read or write data (not
the directory info, but file data) they won't know which ceph file
system name to look up, never mind the names you may have chosen for
the pools, as the 'defaults' themselves changed across the major
releases, from
data pool=fsname
metadata pool=fsname_metadata
to
data pool=fsname.data and
metadata pool=fsname.meta
as the ceph revisions came and went. Any setup that just used
'client.admin' for all mounts didn't see the problem, as the admin
key gave blanket permission.
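You can check what a pool's cephfs tag currently carries (again
using the 'library' data pool from below as the example; the output
shown is what I'd expect, YMMV):
ceph osd pool application get library
If that prints only an empty entry like {"cephfs": {}} rather than
{"cephfs": {"data": "libraryfs"}}, the OSDs have nothing to match
against the 'tag cephfs data=<fsname>' osd cap, and the two commands
above are needed.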
A temporary 'fix' is to change mount requests to use 'client.admin'
and its associated key. A less drastic but still half-fix is to change
the osd cap for your user to just 'allow rw' and delete the
'tag cephfs data=...' restriction.
The only documentation I could find for this security-related,
ceph-ending upgrade catastrophe was in the NFS docs, not the cephfs docs:
https://docs.ceph.com/en/latest/cephfs/nfs/
and the genius-level, much-appreciated pointer from Curt here:
On 5/2/23 14:21, Curt wrote:
This thread might be of use, it's an older version of ceph 14, but
might still apply,
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/23FDDSYBCDVMYGCUTALACPFAJYITLOHJ/
On Tue, May 2, 2023 at 11:06 PM Harry G Coin <hgcoin@xxxxxxxxx> wrote:
In 17.2.6 is there a security requirement that pool names supporting
a ceph fs filesystem match the filesystem name.data for the data and
name.meta for the associated metadata pool? (multiple file systems
are enabled)
I have filesystems from older versions with the data pool name
matching the filesystem and appending _metadata for that, and even
older filesystems with the pool names as in 'library' and
'library_metadata' supporting a filesystem called 'libraryfs'.
The pools all have the cephfs tag.
But using the documented:
ceph fs authorize libraryfs client.basicuser / rw
command allows the root user to mount and browse the library
directory tree, but fails with 'operation not permitted' when even
reading any file.
However, changing the client.basicuser osd auth to 'allow rw'
instead of 'allow rw tag...' allows normal operations.
So:
[client.basicuser]
key = <key stuff>==
caps mds = "allow rw fsname=libraryfs"
caps mon = "allow r fsname=libraryfs"
caps osd = "allow rw"
works, but the same with
caps osd = "allow rw tag cephfs data=libraryfs"
leads to 'operation not permitted' on read, or write, or any
actual access.
It remains a puzzle. Help appreciated!
Were there upgrade instructions about that? Any help pointing me
to them?
Thanks
Harry Coin
Rock Stable Systems
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx