On Thu, Sep 8, 2016 at 7:13 AM, Jim Kilborn <jim@xxxxxxxxxxx> wrote:
> Thanks for the reply.
>
> When I said the compute nodes mounted the cephfs volume, I am referring to a real Linux cluster of physical machines. OpenStack VMs/compute nodes are not involved in my setup. We are transitioning from an older Linux cluster, using NFS from the head node/SAN, to the new cluster using cephfs. All physical systems mount the shared volume, which stores home directories and data.
>
> http://oi63.tinypic.com/2ljp72v.jpg
>
> The Linux cluster is in a NAT private network, where the only systems attached to the corporate network are the Ceph servers and our main Linux head node. They are dual-connected.
>
> You're saying I can't have Ceph volumes mounted with the traffic to the OSDs coming in on more than one interface? It is limited to one interface?

Well, obviously clients connect to OSDs on the "public" network, right? The "cluster" network is used by the OSDs for replication. And as you've noticed, the monitors only use one address, and that needs to be accessible/routable for everybody.

I presume you *have* a regular IP network on the OSDs that the clients can route? Otherwise they won't be able to access any data at all. So I think you just want to set up the monitors and the OSDs on the same TCP network...

Otherwise there's a bit of a misunderstanding, probably because of the names. Consider the "cluster" network to mean "OSD replication traffic" and "public" to mean "everything else, including all client IO".
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
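[Editor's note: to make the public/cluster distinction above concrete, here is a minimal ceph.conf sketch. The subnets are hypothetical; substitute your own. Only the `public network` must be routable by clients and monitors; the `cluster network` is optional and carries OSD replication only.]

```
[global]
# Carries all client IO and monitor traffic; must be
# reachable from every cephfs client and monitor.
public network = 10.0.0.0/24

# OSD-to-OSD replication traffic only; clients never
# need a route to this subnet. (Hypothetical range.)
cluster network = 10.0.1.0/24
```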