Hello friends, please help me. I'm trying to use CephFS with an NFS export behind the NFS ingress service. We are running 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable). Everything works fine until we try to mount the datastore on multiple ESXi hosts (HPE-ESXi-6.5.0-Update2-iso-preGen9-650.U2.9.6.7.1) with vCenter (7.0.1).

My setup: a CephFS filesystem "vSphere" with subvolumes Public & Private.

The NFS service runs on three nodes (10.99.112.[1-3]) with a service spec like:

  service_type: nfs
  service_id: vSphere
  placement:
    label: nfs
  spec:
    port: 12345

This is working fine. Then an ingress for the NFS service, so clients have a single IP to connect to, like:

  service_type: ingress
  service_id: nfs.vSphere
  placement:
    label: nfs
  spec:
    backend_service: nfs.vSphere
    frontend_port: 2049
    monitor_port: 9000
    virtual_ip: 10.99.112.62/26

And an NFS export like:

  Access Type        RW
  CephFS Filesystem  vSphere
  CephFS User        nfs.vSphere.1
  Cluster            vSphere
  NFS Protocol       NFSv4
  Path               /volumes/Private
  Pseudo             /ceph
  Security Label
  Squash             no_root_squash
  Storage Backend    CephFS
  Transport          TCP, UDP

Up to this point everything works as expected, but when we add a new datastore to the ESXi hosts within the cluster like:

  Name    Ceph-NFS
  Folder  /ceph
  Server  10.99.112.61

it adds a new datastore on each host like:

  ESXi1 -> Ceph-NFS
  ESXi2 -> Ceph-NFS(1)

which is strange to me. We want to share one NFS export (Ceph-NFS) across multiple hosts, not create a separate datastore per host.

Is there anybody using NFS with a vSphere cluster who could help us figure out where the problem might be?

Thank you.
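P.S. For completeness, here are rough command-line sketches of the steps above. The two specs are applied with ceph orch apply; the YAML file names below are just placeholders:

  # Apply the nfs and ingress specs above from files (file names are placeholders):
  ceph orch apply -i nfs-vSphere.yaml
  ceph orch apply -i ingress-nfs-vSphere.yaml

  # Check that the services and daemons came up:
  ceph orch ls
  ceph orch ps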
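The export itself was created in the dashboard; from the CLI it would look roughly like this. This is only a sketch, and the argument order of "ceph nfs export create cephfs" changed between releases, so please double-check the help output on 16.2.7 before relying on it:

  # Sketch of the same export created via the CLI instead of the dashboard
  # (Pacific-style argument order: <fsname> <cluster_id> <pseudo_path>):
  ceph nfs export create cephfs vSphere vSphere /ceph --path=/volumes/Private

  # List the exports of the "vSphere" NFS cluster:
  ceph nfs export ls vSphere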
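And on the ESXi side, the per-host equivalent of the vCenter "New Datastore" wizard step would be roughly the following. Untested sketch, assuming the hosts mount via NFS 4.1; the server, share and datastore name are the ones from our setup above:

  # Run on each ESXi host: mount the export with the same server, share and
  # datastore name everywhere (NFS 4.1 shown; untested sketch):
  esxcli storage nfs41 add --hosts=10.99.112.61 --share=/ceph --volume-name=Ceph-NFS

  # Show which NFS 4.1 datastores the host currently has mounted:
  esxcli storage nfs41 list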