On 04/14/2016 03:35 PM, Peter Steele wrote:
On 04/12/2016 01:37 PM, Peter Steele wrote:
On 04/11/2016 11:33 AM, Laine Stump wrote: I wouldn't be too quick to
judge. First take a look at tcpdump on the bridge interface that
the containers are attached to, and on the ethernet device that
connects the bridge to the rest of Amazon's infrastructure. If you
see packets from the container's IP going out but not coming back in,
check the iptables rules (again - firewalld uses iptables to set up
its filtering) for a REJECT or DISCARD rule that has an incrementing
count. I use something like this to narrow down the list I need to
check:
while true; do iptables -v -S -Z | grep -v '^Zeroing' | \
    grep -v 'c 0 0' | grep -e '-c'; echo '**************'; sleep 1; done
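The grep chain in that loop can be sanity-checked against canned output (the sample rules below are invented for illustration; in real use the input comes from `iptables -v -S -Z`):

```shell
# Canned output standing in for 'iptables -v -S -Z'; these rules are made up
sample='-A INPUT -s 10.0.0.5/32 -j REJECT -c 12 3456
-A FORWARD -j ACCEPT -c 0 0
Zeroing chain INPUT'

# Same filters as the watch loop: drop the Zeroing notices and rules whose
# packet/byte counters are zero, keeping only rules that are being hit
matched=$(printf '%s\n' "$sample" | grep -v '^Zeroing' | grep -v 'c 0 0' | grep -e '-c')
echo "$matched"
```

Only the REJECT rule with nonzero counters survives the filter, which is exactly what the loop is watching for.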
If you don't see any REJECT or DISCARD rules being triggered, then
maybe the problem is that AWS is providing an IP address to your
container's MAC, but isn't actually allowing traffic from that MAC
out onto the network.
I'll get this test setup. Unfortunately I'm not particularly
knowledgeable with iptables; we don't use it in our product so I've
never had to deal with it. I think you are right though about what's
happening--AWS doesn't recognize the MAC addresses for containers
running under another instance.
I did this test and there were no REJECT or DISCARD rules being
triggered. I did discover something interesting though. I had two AWS
instances running with some libvirt containers on each. I did a ping
from one AWS instance to an IP assigned to a container on another AWS
instance. The ping failed, and when I checked the ARP table on the
source host, the MAC address recorded for the container being pinged
was that of the br0 interface on the container's host instance, not
the MAC address of the container's eth0 interface.
Doing the same test on-premise using KVM-based instances, when a ping
was run from one VM to a container hosted on another VM, the ARP table
of the source VM contained the MAC address of the eth0 interface bound
to the container, not the MAC address of its host VM.
This indicates to me that AWS assumes every IP address allocated to an
instance is bound directly to that instance, and it doesn't try to go
any further than that. I'm not exactly sure how to
get AWS to route these addresses properly, but it doesn't seem to be
an issue with libvirt per se.
Peter
I finally got this to work, using proxy ARP. I just needed to apply
the following settings on each EC2 instance:
echo 1 > /proc/sys/net/ipv4/conf/br0/forwarding
echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp_pvlan
echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/br0/send_redirects
With these settings my containers and hosts have full connectivity and
behave just as if they were on the same on-premise subnet. This works
for CentOS 7 at least, but I assume the same solution would work for
Ubuntu.
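For the record, the echo commands above only last until reboot; the same settings can be made persistent with a sysctl.d drop-in (the filename here is my own choice, and the br0 entries only take effect once the bridge exists):

```
# /etc/sysctl.d/90-container-proxy-arp.conf (hypothetical filename)
net.ipv4.conf.br0.forwarding = 1
net.ipv4.conf.br0.proxy_arp_pvlan = 1
net.ipv4.conf.br0.proxy_arp = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.br0.send_redirects = 0
```

The file is applied at boot, or immediately with `sysctl --system`.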
Peter
_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users