On 12/31/20 12:49 PM, Grant Taylor wrote:
To me, the biggest question is what type of interfaces you are using. Are you moving a physical interface from the host into the network namespace / container? Or are you using a logical interface from the network namespace / container and possibly extending it to a physical interface in the host via something like bridging? (MACVLAN and IPVLAN play in this area.)
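For the latter approach, a rough (untested as typed) sketch is to bridge one end of a vEth pair to the physical NIC and move the other end into the namespace. The names here -- "blue", "br0", "eth0", "veth-host", "veth-ct" -- are placeholders, and any IP config on eth0 would need to be re-added on br0:

  ip netns add blue
  ip link add br0 type bridge
  ip link set eth0 master br0
  ip link add veth-host type veth peer name veth-ct
  ip link set veth-host master br0
  ip link set veth-ct netns blue
  ip link set br0 up
  ip link set veth-host up
  ip netns exec blue ip link set veth-ct up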
My network namespaces / "containers" use vEth links to interconnect things. But I could also move physical NICs from the host network namespace into the guest (?) network namespace / "container".
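Moving a physical NIC is the simpler case; something like this should do it ("eth1", "blue", and the address are placeholders):

  ip netns add blue
  ip link set eth1 netns blue
  ip netns exec blue ip addr add 192.0.2.10/24 dev eth1
  ip netns exec blue ip link set eth1 up

To hand the NIC back to the host, "ip netns exec blue ip link set eth1 netns 1" moves it into PID 1's (the host's) network namespace.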
I could create logical NICs: (802.1Q) VLAN / MACVLAN / IPVLAN / etc. and move them into the network namespace / "container". -- I have done exactly this at work.
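Roughly, with placeholder names ("eth0", "blue", VLAN ID 100) and untested as typed:

  # 802.1Q VLAN sub-interface
  ip link add link eth0 name eth0.100 type vlan id 100
  ip link set eth0.100 netns blue

  # MACVLAN in bridge mode
  ip link add link eth0 name mv0 type macvlan mode bridge
  ip link set mv0 netns blue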
I think that I can also create tunnel interfaces and move them into the network namespace / "container". -- I have not tried this. The tunnel may need to be created inside the network namespace / "container".
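For a GRE tunnel, I would expect something along these lines, run inside the namespace so the tunnel uses the namespace's own addresses and routes (all addresses are placeholders; again, untested):

  ip netns exec blue ip tunnel add gre1 mode gre \
      local 192.0.2.10 remote 198.51.100.1 ttl 64
  ip netns exec blue ip link set gre1 up
  ip netns exec blue ip addr add 10.0.0.1/30 dev gre1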
Deciding how to connect the network namespace / "container" to the outside world is extremely important. You need a good understanding of what you want to do and how to achieve your goal.
This is where I start to see things like Docker fall down. -- Maybe it's my limited understanding of Docker / Podman / et al. -- My understanding is that many traditional container systems tend to use independent networks, routing, and NAT. This works for some things, but it does not work for everything -- especially when you want L2 connectivity, like using a "container" as a router for other things on the LAN.
I think that some container orchestration systems do provide a way to get a layer 2 connection into the container. However, doing so is the exception and goes against their design methodology, so you start at a disadvantage.
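Docker's macvlan network driver is one such exception; something along these lines should put a container directly on the LAN (subnet, gateway, and parent interface are placeholders for whatever the LAN actually uses):

  docker network create -d macvlan \
      --subnet=192.0.2.0/24 --gateway=192.0.2.1 \
      -o parent=eth0 lan0
  docker run --rm -it --network lan0 --ip 192.0.2.50 alpine sh

The usual MACVLAN caveat applies: the container cannot reach the host itself through the parent interface.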
-- Grant. . . . unix || die