Re: Is nfs-ganesha + kerberos actually a thing?

On 13-10-2023 16:57, John Mulligan wrote:
On Friday, October 13, 2023 10:46:24 AM EDT Torkil Svensgaard wrote:
On 13-10-2023 16:40, Torkil Svensgaard wrote:
On 13-10-2023 14:00, John Mulligan wrote:
On Friday, October 13, 2023 6:11:18 AM EDT Torkil Svensgaard wrote:
Hi

We have kerberos working with bare metal kernel NFS exporting RBDs. I
can see in the ceph documentation[1] that nfs-ganesha should work with
kerberos but I'm having little luck getting it to work.

Could you clarify how you are deploying the ganesha instances? I think you may be asking about cephadm deployed containers but it is not clear to me.

There are a couple of viable ways to deploy nfs-ganesha today: manually/"bare-metal", cephadm, and rook.

Of course, my bad. Ceph version 17.2.6-100.el9cp, nfs-ganesha deployed with cephadm as per the documentation I linked:

"
ceph nfs export create cephfs --cluster-id cephfs_noHA \
  --pseudo-path /testkrb5p_noHA --fsname cephfs_ssd \
  --path=/test --sectype krb5p --sectype sys
"

"
[ceph: root@lazy /]# ceph nfs export info cephfs_noHA /testkrb5p_noHA
{
  "export_id": 1,
  "path": "/test",
  "cluster_id": "cephfs_noHA",
  "pseudo": "/testkrb5p_noHA",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.cephfs_noHA.1",
    "fs_name": "cephfs_ssd"
  },
  "clients": [],
  "sectype": [
    "krb5p",
    "sys"
  ]
}
"

Works with sys, not with krb5p.
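(Roughly the client-side test; the NFS version, server name and mount point below are placeholders:)

"
# sec=sys mounts fine, sec=krb5p does not
mount -t nfs -o vers=4.1,sec=sys   ceph-flash1:/testkrb5p_noHA /mnt/test
mount -t nfs -o vers=4.1,sec=krb5p ceph-flash1:/testkrb5p_noHA /mnt/test
"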

Thanks.

Mvh.

Torkil

This bit from the container log seems to suggest that some plumbing is
missing?

"
13/10/2023 08:09:12 : epoch 6528fb25 : ceph-flash1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
13/10/2023 08:09:12 : epoch 6528fb25 : ceph-flash1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
13/10/2023 08:09:12 : epoch 6528fb25 : ceph-flash1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
13/10/2023 08:09:12 : epoch 6528fb25 : ceph-flash1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
"

Thoughts?
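(A quick check from the container host should show whether the container sees a keytab and a default realm at all; the container name below is a placeholder, take the real one from podman ps, and klist may not even be present in the image, which would be telling in itself:)

"
podman ps --format '{{.Names}}' | grep nfs
podman exec <nfs-container> klist -k /etc/krb5.keytab    # any keytab entries?
podman exec <nfs-container> cat /etc/krb5.conf           # any default_realm?
"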

Mvh.

Torkil

[1] https://docs.ceph.com/en/quincy/mgr/nfs/#create-cephfs-export

Based on your mention of "container log" above I assume this is on either cephadm or rook. The rook team has been actively working on adding kerberized nfs support and we added the `sectype` option for that work. Currently, cephadm doesn't have support for kerberos because it lacks the server side components needed to connect to krb5/ldap.

I would like to see this support eventually come to cephadm but it's not there today IMO.

Missed this part first time around. Interesting.


No worries. I know not everyone is used to inline/bottom posting. :-)

A manual deployment of nfs-ganesha ought to also be able to make use of this option. Ultimately, this generates ganesha config blocks and is mostly agnostic of the cluster/deployment method, but I have not tried it out myself.
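For illustration, the export above should translate into a ganesha EXPORT block roughly like the following; this is sketched from memory and the block the mgr module actually writes may differ in detail:

"
EXPORT {
    Export_Id = 1;
    Path = "/test";
    Pseudo = "/testkrb5p_noHA";
    Access_Type = RW;
    Squash = None;
    Security_Label = true;
    Protocols = 4;
    Transports = TCP;
    SecType = krb5p, sys;
    FSAL {
        Name = CEPH;
        User_Id = "nfs.cephfs_noHA.1";
        Filesystem = "cephfs_ssd";
    }
}
"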

The export config posted above seems fine though, doesn't it? So it wouldn't matter whether it's deployed manually; the missing bits are how to get kerberos inside the container? The container host has all the bits.

Right, even if the container host is set up for kerberos, the needed properties don't propagate to the container. In fact, we probably *don't* want the host's krb5/ldap config in the container; I'd prefer each container to have its own setup. On the mgr module side that would mean every "cluster" could have a different set of kerberos and/or ldap properties. I'm working on SMB support, where I intend a similar workflow: you can define "clusters" (aka "virtual servers") that have their own optional domain membership. Maybe at some point further in the future we can add similar behaviors to nfs. Hope that helps!

I agree that would be the better option but that seems a ways down the road from what you describe =)

The only way to get this to work now with the orchestrator would be to have the container inherit what the host has, or map it through, or whatever the mechanism is for podman. That's the magic trick I need.
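Would something like the spec below do it, assuming `extra_container_args` is honoured for the nfs service type in this release? Untested sketch; the paths assume the host's kerberos files, the host and service id are taken from the export above, and as you say the generated ganesha config may still be missing the krb5 pieces:

"
service_type: nfs
service_id: cephfs_noHA
placement:
  hosts:
    - ceph-flash1
extra_container_args:
  - "--volume=/etc/krb5.conf:/etc/krb5.conf:ro"
  - "--volume=/etc/krb5.keytab:/etc/krb5.keytab:ro"
"

Applied with `ceph orch apply -i` on that file.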

Thanks!

Mvh.

Torkil


--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



