On 04/01/2016 02:07 PM, Laine Stump wrote:
On 03/31/2016 06:43 PM, Peter Steele wrote:
I've created an EC2 AMI for AWS that is essentially a CentOS 7
"hypervisor" image. I deploy instances of it in AWS and create
a number of libvirt-based LXC containers on each instance.
The containers run fine within a single host and have no problem
communicating with each other as well as with their host, and vice
versa. However, containers hosted in one EC2 instance cannot
communicate with containers hosted in another EC2 instance.
We've tried various tweaks to our Amazon VPC but have been unable
to solve this networking issue. If I use something like
VMware or KVM and create VMs from this same hypervisor image, the
containers running under those VMs can communicate with each
other, even across different hosts.
What is the <interface> config of your nested containers? Do they each
get a public IP address?
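For reference, a typical bridged <interface> element for a libvirt LXC
guest looks something like the sketch below; the bridge name (br0) and
MAC address are placeholders, not taken from the original report:

    <interface type='bridge'>
      <mac address='52:54:00:a1:b2:c3'/>
      <source bridge='br0'/>
    </interface>

With a setup like this, each container's traffic leaves the host with
the container's own MAC address, which matters for the EC2 behavior
discussed below.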
Yes, they all have public IPs on the same subnet. When deployed in an
on-premises VM environment, the containers have no problems. Amazon
clearly does something with the packets, though, and the containers
can't talk to each other.
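One concrete thing EC2 does is the per-instance source/destination
check: the VPC drops packets whose source or destination address is not
one AWS has assigned to the instance, which is exactly the situation
for bridged container traffic. A first step, sketched here with a
placeholder instance ID and assuming a configured AWS CLI, is to
disable that check on each host instance:

    aws ec2 modify-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --no-source-dest-check

Disabling the check alone is usually not enough for bridged traffic,
since the VPC still won't deliver frames addressed to MAC/IP pairs it
doesn't know about, but it is a prerequisite for the routed approach
sketched at the end of the thread.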
My real question is: has anyone deployed EC2 images that host
containers and figured out how to get containers on different hosts
to communicate successfully?
No experience with EC2, sorry.
I think we'll need to go to Amazon themselves to resolve this issue.
There is very little information out there about how to get LXC
containers to work properly in EC2.
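For anyone who hits this later: the workaround usually suggested for
this class of problem is to stop bridging containers directly onto the
VPC subnet and instead give each host its own routed container subnet,
then add a VPC route table entry sending that subnet to the host
instance. A minimal sketch with placeholder IDs and an assumed
container CIDR of 10.0.1.0/24 for one host, with source/dest check
already disabled as above:

    # route one host's container subnet at that host's instance
    aws ec2 create-route \
        --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 10.0.1.0/24 \
        --instance-id i-0123456789abcdef0

Each host would get its own CIDR and route entry; an overlay network
between the hosts (e.g. VXLAN) is the other common alternative.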