NFS and Service Dependencies

Hello all,

I'm facing an issue with my first Ceph deployment using CephFS.

I have 2 admin (mgr) hosts, 3 mon/mds hosts, and 3 OSD hosts with 3 OSDs
each.

I have also been running the NFS service on the OSD hosts; my hypervisor
mounts its exports.

I have these service definitions I've applied:

service_type: nfs
service_id: nfs
service_name: nfs.nfs
placement:
  count: 1
  label: nfs
spec:
  port: 2049
  virtual_ip: 172.19.19.165


service_type: ingress
service_id: nfs.nfs
service_name: ingress.nfs.nfs
placement:
  count: 1
  label: nfs
spec:
  backend_service: nfs.nfs
  monitor_port: 9049
  virtual_ip: 172.19.19.165
  keepalive_only: true


I have tagged the three OSD nodes with the `nfs` label.

Generally, both the ingress and the nfs services start on the same node
(osd01, for example). But sometimes the nfs service hits an error and gets
rescheduled onto a different node (osd03, for example); because the ingress
service, which holds the virtual IP, isn't running on that node, my NFS
mount breaks and the hypervisor gets very unhappy.

Is there a way I can specify some kind of scheduling dependency between the
nfs.nfs and ingress.nfs.nfs services so that they will get scheduled to run
on the same node so that the virtual IP is present for nfs to bind to? Or,
am I doing something wrong here?
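
One workaround I've been considering (I'm not sure it's the intended
approach) is dropping the label-based placement and pinning both services to
the same explicit host, so the orchestrator can't split them up. A sketch,
assuming osd01 is the host I'd pin to (host name is from my setup):

```yaml
# Pin the NFS daemon to one explicit host instead of the `nfs` label,
# so it can only ever run where the ingress (and thus the VIP) lives.
service_type: nfs
service_id: nfs
placement:
  hosts:
    - osd01
spec:
  port: 2049
  virtual_ip: 172.19.19.165
---
# Pin the keepalived-only ingress to the same host.
service_type: ingress
service_id: nfs.nfs
placement:
  hosts:
    - osd01
spec:
  backend_service: nfs.nfs
  monitor_port: 9049
  virtual_ip: 172.19.19.165
  keepalive_only: true
```

But that obviously loses failover across the three labelled nodes, which is
why I'm hoping there's a proper dependency mechanism instead.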

Thanks a bunch!

*Alex*
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


