Re: 17.2.6 fs 'ls' ok, but 'cat' 'operation not permitted' puzzle

Eugen,

TL;DR:
https://github.com/ceph/ceph/pull/51359
Thanks, Eugen. Thanks, Harry G Coin.

Longer version:
Thanks for bringing this to my attention.

I've added Harry G Coin's excellent procedure to the Troubleshooting page in the CephFS documentation. The PR that contains the commit that targets the main branch is here: https://github.com/ceph/ceph/pull/51359

The Troubleshooting page might not be the permanent location for this procedure (and, to be honest, it seems to me that using documentation to solve this kind of problem would probably not be the first choice of the informed system operator), but I wanted to get this into the documentation as soon as possible.

I've added Venky as a reviewer of PR#51359 and I've CCed him in this email. His review should guard against any accidental introduction of instructions that might lead readers to engage in unacceptable practices.

Zac Dover
Upstream Docs
Ceph Foundation




------- Original Message -------
On Wednesday, May 3rd, 2023 at 5:32 PM, Eugen Block <eblock@xxxxxx> wrote:


> 
> 
> Hi,
> 
> we had the NFS discussion a few weeks back [2] and at the Cephalocon I
> talked to Zac about it.
> 
> @Zac: it seems that not only NFS over CephFS is affected, but CephFS in
> general. Could you add that note about the application metadata to the
> general CephFS docs as well?
> 
> Thanks,
> Eugen
> 
> [2]
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/2NL2Q57HTSGDDBLARLRCVRVX2PE6FKDA/
> 
> Zitat von Harry G Coin hgcoin@xxxxxxxxx:
> 
> > This problem of file systems being inaccessible after an upgrade to
> > clients other than client.admin dates back to v14 and carries on
> > through v17. It also applies to any case where pool names other than
> > the defaults were specified for new file systems. Solved because Curt
> > remembered a link posted on this list. (Thanks Curt!) Here's what the
> > official ceph docs ought to have provided, for others who hit this.
> > YMMV:
> > 
> > IF
> > 
> > you have ceph file systems whose data and metadata pools were
> > explicitly specified in the 'ceph fs new' command (meaning not left
> > to the defaults, which create them for you),
> > 
> > OR
> > 
> > you have an existing ceph file system and are upgrading to a new
> > major version of ceph
> > 
> > THEN
> > 
> > for the documented 'ceph fs authorize...' commands to work as
> > documented (and to avoid strange 'operation not permitted' errors
> > during file I/O, and similar security-related problems, for every
> > user other than client.admin), you must first run:
> > 
> > ceph osd pool application set <your metadata pool name> cephfs metadata <your ceph fs filesystem name>
> > 
> > and
> > 
> > ceph osd pool application set <your data pool name> cephfs data <your ceph fs filesystem name>
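> > 
> > (For example, with the 'libraryfs' setup described further down, where
> > the filesystem is 'libraryfs', the data pool is 'library', and the
> > metadata pool is 'library_metadata', those two commands would look
> > like this; the concrete names are only an illustration of the
> > placeholders above:
> > 
> > ceph osd pool application set library_metadata cephfs metadata libraryfs
> > ceph osd pool application set library cephfs data libraryfs
> > 
> > You can check the result afterwards with 'ceph osd pool application
> > get library' and 'ceph osd pool application get library_metadata',
> > which print the application key/value metadata set on each pool.)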
> > 
> > Otherwise, when the OSDs get a request to read or write data (not
> > the directory info, but file data) they won't know which ceph file
> > system name to look up, never mind the names you may have chosen for
> > the pools, as the 'defaults' themselves changed across the major
> > releases, from
> > 
> > data pool=fsname
> > metadata pool=fsname_metadata
> > 
> > to
> > 
> > data pool=fsname.data and
> > metadata pool=fsname.meta
> > 
> > as the ceph revisions came and went. Any setup that just used
> > 'client.admin' for all mounts didn't see the problem as the admin
> > key gave blanket permission.
> > 
> > A temporary 'fix' is to change mount requests to use 'client.admin'
> > and its associated key. A less drastic, but still only a half-fix, is
> > to change the osd cap for your user to just 'caps osd = "allow rw"',
> > deleting the 'tag cephfs data=...' part.
> > 
> > The only documentation I could find for this upgrade-related,
> > security-related, ceph-ending catastrophe was in the NFS docs, not
> > the cephfs docs:
> > 
> > https://docs.ceph.com/en/latest/cephfs/nfs/
> > 
> > and the genius-level, much-appreciated pointer from Curt here:
> > 
> > On 5/2/23 14:21, Curt wrote:
> > 
> > > This thread might be of use. It's for an older version, ceph 14,
> > > but it might still apply:
> > > https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/23FDDSYBCDVMYGCUTALACPFAJYITLOHJ/
> > > 
> > > On Tue, May 2, 2023 at 11:06 PM Harry G Coin hgcoin@xxxxxxxxx wrote:
> > > 
> > > In 17.2.6, is there a security requirement that the pool names
> > > supporting a ceph fs filesystem match <filesystem name>.data for the
> > > data pool and <filesystem name>.meta for the metadata pool?
> > > (Multiple file systems are enabled.)
> > > 
> > > I have filesystems from older versions where the data pool name
> > > matches the filesystem name and the metadata pool appends _metadata
> > > to it,
> > > 
> > > and even older filesystems with pool names such as 'library' and
> > > 'library_metadata' supporting a filesystem called 'libraryfs'.
> > > 
> > > The pools all have the cephfs tag.
> > > 
> > > But using the documented:
> > > 
> > > ceph fs authorize libraryfs client.basicuser / rw
> > > 
> > > command allows the root user to mount and browse the library
> > > directory tree, but reading any file fails with 'operation not
> > > permitted'.
> > > 
> > > However, changing the client.basicuser osd auth to 'allow rw'
> > > instead of
> > > 'allow rw tag...' allows normal operations.
> > > 
> > > So:
> > > 
> > > [client.basicuser]
> > > key = <key stuff>==
> > > caps mds = "allow rw fsname=libraryfs"
> > > caps mon = "allow r fsname=libraryfs"
> > > caps osd = "allow rw"
> > > 
> > > works, but the same with
> > > 
> > > caps osd = "allow rw tag cephfs data=libraryfs"
> > > 
> > > leads to 'operation not permitted' on read, write, or any actual
> > > data access.
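> > > 
> > > (For what it's worth, my understanding is that the 'tag cephfs
> > > data=libraryfs' form of the osd cap only matches pools whose cephfs
> > > application metadata contains data=libraryfs; what a pool actually
> > > carries can be inspected with, for example:
> > > 
> > > ceph osd pool application get library
> > > 
> > > which prints the application key/value metadata set on that pool.)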
> > > 
> > > It remains a puzzle. Help appreciated!
> > > 
> > > Were there upgrade instructions about this? Any help pointing me to
> > > them would be appreciated.
> > > 
> > > Thanks
> > > 
> > > Harry Coin
> > > Rock Stable Systems
> > > 
> 
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



