On 04/02/2016 05:20 PM, Laine Stump wrote:
You say they can talk among containers on the same host, and with their own host (I guess you mean the virtual machine that is hosting the containers), but not to containers on another host. Can the containers communicate outside of the host at all?

If not, perhaps the problem is iptables rules for the bridge device the containers are using - try running this command:

  sysctl net.bridge.bridge-nf-call-iptables

If that returns:

  net.bridge.bridge-nf-call-iptables = 1

then run this command and see if the containers can now communicate with the outside:

  sysctl -w net.bridge.bridge-nf-call-iptables=0
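If disabling it helps, the natural follow-up is to retry the failing path and then make the change stick across reboots. A rough sketch - the container address and the sysctl.d file name below are placeholders of mine, not anything from this thread:

  # after sysctl -w net.bridge.bridge-nf-call-iptables=0, retry a cross-host container connection
  ping -c 3 10.0.0.25

  # only if that fixed it: persist the setting (placeholder file name)
  echo 'net.bridge.bridge-nf-call-iptables = 0' > /etc/sysctl.d/90-bridge-nf.conf
  sysctl -p /etc/sysctl.d/90-bridge-nf.conf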
This key doesn't exist in the CentOS 7 image I'm running. I do have a bridge interface defined, of course, although we do not run iptables; we don't need it when running our software on premises. Actually, in CentOS 7 the old iptables service isn't there by default; it's been replaced by a new service called firewalld that serves the same purpose. We don't run that either at present.
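For what it's worth, that sysctl key only shows up once the kernel's bridge netfilter code is loaded, which would explain why it's missing here. A quick way to check the moving parts on a stock CentOS 7 box - module name and commands here are my assumptions, not something verified on this system:

  # is the bridge netfilter module loaded? (a separate module on newer CentOS 7 kernels)
  lsmod | grep br_netfilter

  # load it if you want the net.bridge.* sysctls to exist at all
  modprobe br_netfilter

  # confirm firewalld really is inactive and no iptables rules are loaded
  systemctl is-active firewalld
  iptables -L -n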
Well, if they've allowed your virtual machine to acquire multiple IP addresses, it would make sense that they also let it actually use those addresses. I'm actually more inclined to think that the packets simply aren't getting out of the virtual machine (or that the responses aren't getting back in).
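One way to tell which of those it is would be to watch the traffic at both ends while a container tries to reach one on the other host. The interface name and container address below are just placeholders:

  # on the sending VM: do the container's packets actually leave via the outer interface?
  tcpdump -n -i eth0 host 10.0.0.25

  # on the receiving VM: do they ever arrive, and do the replies go back out?
  tcpdump -n -i eth0 host 10.0.0.25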
The difference is that the IPs aren't assigned to the virtual machine itself but rather to containers running under the AWS instance, and something in how Amazon manages their stack prevents the packets from getting from one container to the other. The very fact that the exact same software runs fine in VMs under, say, VMware or KVM, but not in VMs under AWS, clearly points to AWS as the ultimate source of the problem.
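If that's what's happening, one way to confirm it from the AWS side would be to list which private IPs EC2 actually has registered on the instance's network interfaces - as far as I understand, EC2 only delivers traffic to addresses it knows are associated with an interface. The CLI call below is my assumption (the instance id is a placeholder):

  # list the private IPs EC2 has associated with this instance's interfaces
  aws ec2 describe-network-interfaces \
      --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
      --query 'NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress'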
_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users