Mark Fox wrote:
> Neal Murphy <neal.p.murphy <at> alum.wpi.edu> writes:
>
>> If you're talking about VMs on a single Linux host talking through a bridge
>> (virtual LAN) on that Linux host, then you can probably use ebtables to
>> control the bridge because, again, the Linux host will not see IP traffic
>> between VMs.

This is of course wrong. The host does the job of passing packets to and
from the VMs, so it has to see the traffic.

> My understanding was that a bridge was a layer 2 device and netfilter would
> be completely out of the loop on traffic travelling across the bridge.

Not if the kernel has BRIDGE_NETFILTER=y. Then the various
net.bridge.bridge-nf-* sysctls control which kinds of traffic are passed to
conntrack, iptables, ip6tables or arptables. By default, all of it is passed.

> So I disabled all forwarding on the container host, but was surprised when
> that cut the containers off.

What exactly do you mean by "I disabled all forwarding"? Setting
net.ipv4.ip_forward=0 or net.ipv4.conf.*.forwarding=0 should have no effect
on bridged traffic. However, iptables' DROP or REJECT targets may affect
bridged IPv4 packets when net.bridge.bridge-nf-call-iptables=1.

> I don't get the impression that this is specific to containers.

It is not. It is specific to the Linux bridge.

> There is documentation saying that one should do a 'iptables -I FORWARD
> -m physdev --physdev-is-bridged -j ACCEPT' to allow traffic between a host
> and any KVM guests.

It is simpler and more efficient to disable passing bridged IPv4 packets to
iptables altogether with net.bridge.bridge-nf-call-iptables=0.

--
To unsubscribe from this list: send the line "unsubscribe netfilter" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
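P.S. For readers who want to make the suggested setting persistent, a sketch of a sysctl configuration fragment follows. The filename is hypothetical, and note that the net.bridge.bridge-nf-* keys only exist once the bridge (or, on newer kernels, br_netfilter) module is loaded:

```
# /etc/sysctl.d/90-bridge-nf.conf (hypothetical filename)
# Keep bridged IPv4, IPv6 and ARP traffic out of
# iptables, ip6tables and arptables respectively.
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```

Apply it with "sysctl -p /etc/sysctl.d/90-bridge-nf.conf", or set the values at runtime with "sysctl -w".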