I've asked this question on the Docker support forum (no email available!) but have seen no answers. If there are other better places to ask, please let me know.

I have a Fedora 23 host (to be upgraded to F25 or F26 in the near future). It has two networks, and I want to run a Docker container that can participate in both of them. The host defines these two networks (and a gateway on a third network):

  $ route -n
  Destination   Gateway      Genmask          ...  Iface
  default       w.x.y.254    0.0.0.0          ...  eth0
  a.b.c.0       0.0.0.0      255.255.255.0    ...  eth2
  e.f.g.0       0.0.0.0      255.255.255.0    ...  eth3

So far for Docker, I have:

  (1) $ docker network create --subnet a.b.c.0/24 --gateway a.b.c.254 eth2
  (2) $ docker network create --subnet e.f.g.0/24 --gateway e.f.g.254 eth3
  (3) $ docker create --network=eth2 --ip=a.b.c.1 container program
  (4) $ docker network connect --ip=e.f.g.1 eth3 container
  (5) $ docker start container   # (With a few more arguments in real life.)

This gets me most of what I want, except the ability to actually participate in the host's networks: traffic doesn't leave the container. I understand I must somehow map the container's networks to the host's networks, but I'm having trouble learning how to do that.

I can map the first network, I think. Instead of line (3) above, I try:

  (6) $ docker create --network=eth2 --ip=a.b.c.1 \
          -p a.b.c.d:1-65535:1-65535/tcp -p a.b.c.d:1-65535:1-65535/udp \
          container program

But I don't see how to map the second network, since the -p option is not available for "docker network connect" (or for "docker start"). Of course, using -p might not be the proper solution anyway.

Is it possible to do what I want?

I'm using docker-engine-1.12.6-1.fc23.x86_64 and associated packages from Docker, not Fedora's own Docker packages, because the latest available from Fedora for F23 is Docker 1.10.

-- Dave Close
"Age is a very high price to pay for maturity." -- Tom Stoppard
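P.S. One thing I've been reading about, but have not tested, is whether the macvlan driver would be a better fit here than the default bridge networks, since it attaches containers directly to a physical interface instead of NATing through the host. If I understand the documentation, the setup would look roughly like the sketch below. The network names macnet2 and macnet3 are just placeholders I made up; everything else reuses my subnets and interfaces from above.

  # Untested sketch: macvlan networks bound to the host's physical interfaces.
  $ docker network create -d macvlan --subnet=a.b.c.0/24 --gateway=a.b.c.254 \
        -o parent=eth2 macnet2
  $ docker network create -d macvlan --subnet=e.f.g.0/24 --gateway=e.f.g.254 \
        -o parent=eth3 macnet3

  # Then create and connect the container much as before, with no -p mapping.
  $ docker create --network=macnet2 --ip=a.b.c.1 container program
  $ docker network connect --ip=e.f.g.1 macnet3 container
  $ docker start container

If that is the right direction, I'd appreciate confirmation, and also a pointer on the caveat I've seen mentioned that the host itself can't reach a container over a macvlan interface without additional setup.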