On 19/12/2014 02:18, Craig Lewis wrote:
> The daemons bind to *,

Yes, but *only* for the OSD daemons. Am I wrong? Personally, I must provide IP addresses for the monitors in /etc/ceph/ceph.conf, like this:

    [global]
    mon host = 10.0.1.1, 10.0.1.2, 10.0.1.3

Or like this:

    [mon.1]
    mon addr = 10.0.1.1

    [mon.2]
    mon addr = 10.0.1.2

    [mon.3]
    mon addr = 10.0.1.3

And every time, the monitor daemons bind to only one address. If a Ceph client wants to contact the cluster, it must contact the monitors. Here is my problem: the monitors listen only on the 10.0.1.0/24 network, not on 10.0.2.0/24.

Do you have monitor daemons that bind to *? Personally, I don't (always just one interface). Is it possible to provide 2 IP addresses for the monitors in the /etc/ceph/ceph.conf file?

> so adding the 3rd interface to the machine will
> allow you to talk to the daemons on that IP.

The 3rd interface has existed since the beginning (before the creation of the cluster), but the monitors bind to only one interface.

> I'm not really sure how you'd setup the management network though. I'd
> start by setting the ceph.conf public network on the management nodes to
> have the public network 10.0.2.0/24, and an /etc/hosts file with the
> monitor's names on the 10.0.2.0/24 network.
>
> Make sure the management nodes can't route to the 10.0.1.0/24 network, and
> see what happens.

For now, I can't have monitors that bind to 10.0.1.[123] *and* 10.0.2.[123].

> Do you really plan on having enough traffic creating and deleting RBD
> images that you need a dedicated network? It seems like setting up link
> aggregation on 10.0.1.0/24 would be simpler and less error prone.

This is not about traffic. I must have a node to manage RBD images, and this node is in a different VLAN (this is an OpenStack install... I'm trying... ;)

-- 
François Lafont

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
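
[Editor's note: for readers following this thread, a minimal ceph.conf sketch of the single-address monitor setup discussed above. The `mon addr` values come from the thread; the `public network` line is an assumption added for illustration. Each monitor section accepts exactly one `mon addr`, which is the limitation being described: there is no place here to list a second address on 10.0.2.0/24.]

    [global]
    # Assumed: restricts daemons to the 10.0.1.0/24 subnet.
    public network = 10.0.1.0/24

    # One "mon addr" per monitor; the monitor binds to this
    # single address only (addresses taken from the thread).
    [mon.1]
    mon addr = 10.0.1.1

    [mon.2]
    mon addr = 10.0.1.2

    [mon.3]
    mon addr = 10.0.1.3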