Re: cephx questions

On Fri, 30 Jul 2010, Nathan Regola wrote:
> Hi,
> 
> Thank you for this excellent project.  Are you still planning on 
> incorporating the code from the Maat paper at some time in the future? 
> As I'm sure you already know, access control lists on the storage 
> objects would be quite useful for storing research data. 

We may.  The question is what granularity of security people really need.  
Maat essentially gives fine-grained access control on individual objects 
storing file data, based on the user you are logged in as on the client.  
But in order for this to be very useful, you need some additional 
user-level distributed authentication scheme like Kerberos, or else a 
misbehaving client can pose as any user on that host.  Currently, the 
kernel client authenticates to cephx and is given a capability to access 
all objects in the file system object pool; that is, the same set of 
objects it can access by traversing the namespace as 'root'.  It doesn't 
get access to other object pools in the distributed object store (e.g., 
metadata objects, rbd images, or other pools you may create).

What kind of objects (or files?) do you want to protect with ACLs?  Do you 
need object granularity, or is pool granularity sufficient?

> We are experimenting with a small ceph cluster (3 older Sun V20z servers 
> with 2 SCSI drives each). To give you a brief overview of our setup, we 
> have one MDS/monitoring node and 2 OSDs.  The OSDs have 16GB of RAM and 
> the MDS/monitoring node has 12GB.  We are using btrfs on a dedicated 
> drive, mounted as /btrfs (each node has this mount). On the OSDs, we 
> also use a partition on the boot disk as /btrfs_journal for the journal 
> and the data is on the second drive, mounted as /btrfs.  We only have 
> toy data on the file system, so we can reformat the ceph file system as 
> needed.  We attempted an in-place upgrade to 0.21 yesterday.  Upon 
> restarting the ceph processes on the MDS/monitoring node, we could no 
> longer log in to ceph due to a cephx authentication failure.  We did not 
> change the configuration in /etc/ceph/ceph.conf.  I was able to disable 
> cephx, and eventually imported some keys and put the admin_keyring.bin 
> file in /etc/ceph/keyring.bin, and I can log in now.  Was the code changed 
> to enforce cephx or stop looking in certain locations for keys?  I 
> believe it was enabled before, but it never complained.

Ah, I suspect that you had/have

	auth supported = cephx

in your ceph.conf, which is what the sample on the wiki had.  The 
code in v0.20.* was actually looking for 'supported auth', though (fixed 
in v0.21).  The cephx stuff was probably just turned off before now, and 
you didn't notice that your admin keyring wasn't in place.
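
For reference, a minimal sketch of the relevant ceph.conf fragment under 
v0.21 (the keyring path here is just the location you mentioned, not a 
requirement):

	[global]
		auth supported = cephx
		keyring = /etc/ceph/keyring.bin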

> Also, what causes an MDS to be blacklisted?  Until I fixed the 
> authentication issues, "ceph mds stat" returned "3 blacklisted MDS".  We 

Any time an mds crashes (or is declared crashed by the monitor, due to not 
sending regular heartbeats), its address is blacklisted in the osd map to 
prevent a runaway/stale process from interfering with a new cmds instance.  
It's completely normal.  The entry will expire on its own after several 
(24?) hours.
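
If you're curious, the blacklist entries show up in the osd map dump; 
something along these lines should show them (the exact invocation varies 
a bit between versions):

	ceph osd dump -o - | grep blacklist    # or just 'ceph osd dump' on newer versions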

> I was interested in more detailed information on cephx (where is the 
> auth database stored) and what the various keyring lines in the 
> ceph.conf file control.

The monitor keeps the auth data in $mon_data/auth/*.  It's not in a form 
that you can manipulate manually, unfortunately.
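
For example, with mon data = /btrfs/mon0 (just a guess at your layout), the 
records live under /btrfs/mon0/auth/.  Inspect or change them through the 
monitor instead:

	ls /btrfs/mon0/auth/    # internal monitor state; don't edit by hand
	ceph auth list          # the supported way to see what's in there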

> Also, is there a baseline sample of what your 
> "ceph auth list" output should look like and the minimum capabilities 
> various entities need to function?

The cauthtool man page has a minimal set of caps for a machine to mount 
the file system.  Basically,

client.foo
        key: asdf
        caps: [mds] allow
        caps: [mon] allow r
        caps: [osd] allow rw; allow rw pool=0
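
To generate a key with that cap set, something along these lines should 
work (double-check the flags against the cauthtool man page for your 
version; 'client.foo' and the paths are just placeholders):

	cauthtool --create-keyring --gen-key -n client.foo /etc/ceph/foo.keyring
	cauthtool -n client.foo --cap mds 'allow' --cap mon 'allow r' \
		--cap osd 'allow rw; allow rw pool=0' /etc/ceph/foo.keyring
	ceph auth add client.foo -i /etc/ceph/foo.keyring    # register with the monitor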

Or, if you're lazy, you can just use the client.admin key.  Just be sure 
to pass both name=admin and secret=asdf (or name=foo,secret=asdf, if using 
the above) as mount options.  (The 'client.' name prefix is assumed.)
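
For instance (a sketch; substitute your monitor's address and the real 
secret from 'ceph auth list'):

	mount -t ceph 192.168.0.1:/ /mnt/ceph -o name=foo,secret=asdf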

I hope that answers your questions?
sage


> I pasted a redacted version of our 
> "auth list" output below. Perhaps we have an error in our authorization 
> list that caused the errors we experienced after the upgrade (and before 
> I changed some things).  I can send a ceph.conf file as well if you need 
> it. 
> 
> We can provide feedback on cephx code if you need it, as we were 
> planning on keeping our cluster "cephx enabled". 
> 
> Thanks,
> 
> Nathan Regola
> Grid and Cloud Computing Analyst
> University of Notre Dame
> Center for Research Computing
> P.O. Box 539
> Room 110 Information Technology Center
> Notre Dame, IN  46556
> 
> Phone: 574-631-5287
> 
> 
> 10.07.30_13:07:08.584179 mon <- [auth,list]
> 10.07.30_13:07:08.584825 mon0 -> 'installed auth entries: 
> mon.0
>         key: T
>         caps: [mon] allow *
> mds.opteron03
>         key: U
>         caps: [mds] allow
>         caps: [mon] allow rwx
>         caps: [osd] allow *
> osd.0
>         key: V
>         caps: [mon] allow rwx
>         caps: [osd] allow *
> osd.1
>         key: W
>         caps: [mon] allow rwx
>         caps: [osd] allow *
> client.admin
>         key: X
>         caps: [mds] allow
>         caps: [mon] allow *
>         caps: [osd] allow *
> ' (0)
> 
