Re: nfs4 kerberos

Thanks Ian!

I will try and work on it today.

Dan

On Wed, 2011-04-06 at 22:55 -0700, Ian Hayes wrote:
> I whipped up a quick NFS4 cluster in Xen after I got home, and tried
> to remember what I did to make it work. After a bit, it all fell back
> into place. This is quick and dirty, and not how I would do things in
> production, but it's a good start. Note that I didn't set up a shared
> filesystem, but that should be academic at this point.
> 
> 1) Create your nfs/nfsserver.mydomain keytab (sketch below)
> 2) Copy the keytab to both node1 and node2
> 3) Modify /etc/init.d/portmap: in the start function, add "hostname
> nfsserver.mydomain"; in the stop function, add "hostname
> nodeX.mydomain"
> 4) Drop something that looks like the attached cluster.conf file
> in /etc/cluster
> 5) Set up your exports: /exports     gss/krb5p(rw,async,fsid=0)
> 6) Start CMAN and RGManager
> 7) ?
> 8) Profit - mount -t nfs4 nfsserver.mydomain:/ /mnt/exports -o
> sec=krb5p
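> 
> For step 1, assuming an MIT KDC, the keytab creation looks roughly
> like this (the principal name and temp path are just examples,
> adjust for your realm):
> 
>     # on the KDC: create the service principal for the floating name
>     kadmin.local -q "addprinc -randkey nfs/nfsserver.mydomain"
>     # export it to a keytab, then copy that file to both nodes
>     kadmin.local -q "ktadd -k /tmp/nfsserver.keytab nfs/nfsserver.mydomain"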
> 
> The trick here is that we change the hostname before any Kerberized
> services start, so rpc.gssd is happy when it tries to read the
> keytab. Also, I use all Script resources instead of the NFS resource.
> I never really liked it, and I'm old and set in my ways. This works,
> and I'm certain that it reads /etc/exports. First, we set up the IP,
> then start each necessary daemon as a dependency for the next. I've
> been bouncing the service back and forth for the last 10 minutes, and
> the only complaint is a stale NFS mount on my client whenever I fail
> over.
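> 
> The portmap change from step 3 is only a couple of lines; on node1 it
> looks something like this (a sketch of the relevant bits only, node2
> reverts to its own name in stop):
> 
>     # /etc/init.d/portmap on node1 (only the hostname lines are new)
>     start() {
>             hostname nfsserver.mydomain   # take the floating identity first
>             # ... stock portmap start logic, unchanged ...
>     }
>     stop() {
>             # ... stock portmap stop logic, unchanged ...
>             hostname node1.mydomain       # fall back to the node's real name
>     }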
> 
> <?xml version="1.0"?>
> <cluster config_version="2" name="NFS">
>         <fence_daemon post_fail_delay="0" post_join_delay="3"/>
>         <clusternodes>
>                 <clusternode name="node1.mydomain" nodeid="1" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="Fence_Manual" nodename="node1.mydomain"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>                 <clusternode name="node2.mydomain" nodeid="2" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="Fence_Manual" nodename="node2.mydomain"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="1" two_node="1"/>
>         <fencedevices>
>                 <fencedevice agent="fence_manual" name="Fence_Manual"/>
>         </fencedevices>
>         <rm>
>                 <failoverdomains>
>                         <failoverdomain name="NFS" ordered="0" restricted="1">
>                                 <failoverdomainnode name="node1.mydomain" priority="1"/>
>                                 <failoverdomainnode name="node2.mydomain" priority="1"/>
>                         </failoverdomain>
>                 </failoverdomains>
>                 <resources>
>                         <ip address="192.168.0.73" monitor_link="1"/>
>                         <script file="/etc/init.d/portmap" name="Portmapper"/>
>                         <script file="/etc/init.d/rpcgssd" name="RPCGSSD"/>
>                         <script file="/etc/init.d/rpcidmapd" name="IDMAPD"/>
>                         <script file="/etc/init.d/nfs" name="NFS"/>
>                 </resources>
>                 <service autostart="1" domain="NFS" name="NFS" recovery="relocate">
>                         <ip ref="192.168.0.73">
>                                 <script ref="Portmapper">
>                                         <script ref="RPCGSSD">
>                                                 <script ref="IDMAPD">
>                                                         <script ref="NFS"/>
>                                                 </script>
>                                         </script>
>                                 </script>
>                         </ip>
>                 </service>
>         </rm>
> </cluster>
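> 
> For step 6 and the bouncing, rgmanager's clusvcadm does the work;
> assuming the service name above, something like:
> 
>     service cman start && service rgmanager start   # step 6, on both nodes
>     clusvcadm -e NFS                                 # enable the service if it isn't running yet
>     clusvcadm -r NFS -m node2.mydomain               # relocate it to the other node to test failover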
> 
> On Wed, Apr 6, 2011 at 6:52 PM, Ian Hayes <cthulhucalling@xxxxxxxxx>
> wrote:
>         Shouldn't have to recompile rpc.gssd. On failover I migrated
>         the IP address first, made portmapper depend on the IP,
>         rpc.gssd depend on portmap, and nfsd depend on rpc. As for
>         the hostname, I went with the inelegant solution of putting a
>         'hostname' command in the start function of the portmapper
>         script, since that fires first in my config.
>         
>         > On Apr 6, 2011 6:06 PM, "Daniel R. Gore"
>         > <danielgore@xxxxxxxxxxx> wrote:
>         > 
>         > I also found this thread, after many searches.
>         > http://linux-nfs.org/pipermail/nfsv4/2009-April/010583.html
>         > 
>         > As I read through it, there appears to be a patch for
>         > rpc.gssd which allows the daemon to be started and
>         > associated with multiple hosts. I do not want to compile
>         > rpc.gssd myself, and the patch is from over two years ago.
>         > I would hope that RHEL6 has rpc.gssd patched to meet this
>         > requirement, but no documentation appears to exist for how
>         > to use it.
>         > 
>         > 
>         > 
>         > 
>         > On Wed, 2011-04-06 at 20:23 -0400, Daniel R. Gore wrote:
>         > > Ian,
>         > > 
>         > > Thanks for the info. 
>         > > 
>         > 
>         > >...
>         > 
> 
> 




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

