On Mon, 2015-12-07 at 06:48 -0800, Gregory Farnum wrote:

<snip>

> >> I'm probably just being dense here, but I don't quite understand what
> >> all this is trying to accomplish. It looks like it's essentially
> >> trying to set up VLANs (with different rules) over a single physical
> >> network interface, that is still represented to userspace as a single
> >> device with a single IP. Is that right?
> >
> > That's almost what it is, with two differences:
> > 1) there are separate route tables per VLAN,
> > 2) each VLAN interface (public, cluster) has its own address.
>
> Okay, but if each interface has its own address, why do you need
> Ceph to do anything at all? You can specify the public and cluster
> addresses, they'll bind to the appropriate interface, and then you can
> do stuff based on the interface/VLAN it's part of. Right?
> -Greg

Yes. And in the generic case that is almost good enough.

In the case I'm discussing, with separate Linux kernel routing tables as
well, we need to steer the route lookups that happen once the TCP stack
has performed its packetization into the correct table.

Depending on how Ceph handles interface/IP binding for outbound
connections, this may be easy. I.e. if Ceph binds to the specific
address, not only on the listening socket but also when creating
outbound sockets, we can create "ip rule"s that match on the source
address, and AFAIU "we're home" - at this level. Do you know if this is
how Ceph manages the sockets in this case?

But if we instead end up with the kernel trying to figure out which
source address to use ( https://tools.ietf.org/html/rfc6724 ), it gets
a whole lot trickier.

For monitors that live only on the public network (as per the
documentation), the situation is simpler; we can mark their traffic
outside of Ceph using e.g. iptables.

/Martin
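The source-address steering described above can be sketched roughly as
follows. This is a hypothetical configuration, not something from the
thread: the addresses (192.0.2.10, 198.51.100.10), gateways, interface
names (eth0.100, eth0.200), and table numbers (100, 200) are all
assumed for illustration, and the commands need root.

```shell
# Sketch, assuming the public VLAN carries 192.0.2.10 (gateway 192.0.2.1,
# table 100) and the cluster VLAN carries 198.51.100.10 (gateway
# 198.51.100.1, table 200). All names and numbers here are hypothetical.

# If the daemon binds its outbound sockets to a specific source address,
# source-based rules are enough to pick the right routing table:
ip rule add from 192.0.2.10/32 table 100
ip rule add from 198.51.100.10/32 table 200
ip route add default via 192.0.2.1 dev eth0.100 table 100
ip route add default via 198.51.100.1 dev eth0.200 table 200

# If it does not bind a source address, traffic can instead be marked
# outside of Ceph - e.g. monitor traffic on the default mon port 6789 -
# and steered by fwmark:
iptables -t mangle -A OUTPUT -p tcp --dport 6789 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
```

The key point either way is that the rule must fire after the TCP stack
has chosen (or been given) a source address, which is why bind-before-
connect in the daemon makes the source-address variant so much simpler.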