Re: NFS and Service Dependencies

Is it possible that you're just missing frontend_port?
https://docs.ceph.com/en/reef/cephadm/services/nfs/#high-availability-nfs
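Adapted from that docs page, a spec pair for the haproxy-based HA mode might
look roughly like this. Note that in this mode you drop keepalive_only and
the virtual_ip on the nfs spec: ganesha binds an internal port while haproxy
listens on frontend_port at the virtual IP. The ports and the /24 prefix are
placeholders for your environment:

service_type: nfs
service_id: nfs
placement:
  count: 1
  label: nfs
spec:
  port: 12049
---
service_type: ingress
service_id: nfs.nfs
placement:
  count: 1
  label: nfs
spec:
  backend_service: nfs.nfs
  frontend_port: 2049
  monitor_port: 9049
  virtual_ip: 172.19.19.165/24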



On Sat, 9 Nov 2024 at 21:43, Tim Holloway <timh@xxxxxxxxxxxxx> wrote:

> Hmmmm. I have somewhat similar issues, and I'm not entirely happy with
> what I've got, but let me fill you in.
>
> Ceph supports NFS by launching instances of Ganesha-nfs. If you're using
> managed services, this runs from the main Ceph container image, and the
> resulting container name is rather long and ugly. You could set up a
> systemd dependency on that name, but it's not a pretty thing to do.
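> If you did want to go that route, a systemd drop-in on the consuming
> service might look something like this. The fsid, service id, and hostname
> below are placeholders, and the exact unit name varies per deployment;
> check `systemctl list-units 'ceph-*'` on the node for the real one:
>
>   # /etc/systemd/system/your-consumer.service.d/ceph-nfs.conf
>   [Unit]
>   After=ceph-<fsid>@nfs.nfs.<host>.service
>   Wants=ceph-<fsid>@nfs.nfs.<host>.service
>
> After dropping that in, run `systemctl daemon-reload` to pick it up.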
>
> I'm not sure how much putting an "nfs" label on a server does beyond
> simply labelling it. On the other hand, you can configure NFS servers on
> multiple Ceph nodes, and those nodes will all be available as NFS
> servers. It's not a Highlander situation where only one can be active
> at a time. I keep two going myself.
>
> The documentation mentions that keepalived can fail over NFS, but there
> are cautions attached. It's worth trying, though.
>
>     Tim
>
> On 11/8/24 17:56, Alex Buie wrote:
> > Hello all,
> >
> > I'm facing an issue with my first Ceph deployment with CephFS.
> >
> > I have 2 admin (mgr) hosts, 3 mon/mds hosts, and 3 OSD hosts with 3 OSDs
> > each.
> >
> > I also have been using the OSDs to run the nfs service that my hypervisor
> > connects to.
> >
> > I have these service definitions I've applied:
> >
> > service_type: nfs
> > service_id: nfs
> > service_name: nfs.nfs
> > placement:
> >    count: 1
> >    label: nfs
> > spec:
> >    port: 2049
> >    virtual_ip: 172.19.19.165
> >
> >
> > service_type: ingress
> > service_id: nfs.nfs
> > service_name: ingress.nfs.nfs
> > placement:
> >    count: 1
> >    label: nfs
> > spec:
> >    backend_service: nfs.nfs
> >    monitor_port: 9049
> >    virtual_ip: 172.19.19.165
> >    keepalive_only: true
> >
> >
> > and I have tagged the 3 OSD nodes with the `nfs` label.
> >
> > Generally, both the ingress and the nfs service start on the same node
> > (osd01, for example). But sometimes, errors occur with the nfs service
> > and it gets rescheduled and ends up on a different node (osd03, for
> > example). Because the ingress service isn't running on that node, it
> > breaks my NFS mount and the hypervisor gets very unhappy.
> >
> > Is there a way I can specify some kind of scheduling dependency between
> > the nfs.nfs and ingress.nfs.nfs services so that they get scheduled to
> > run on the same node and the virtual IP is present for nfs to bind to?
> > Or am I doing something wrong here?
> >
> > Thanks a bunch!
> >
> > *Alex*
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx


