Re: Routing / forwarding in user space?

Linux Advanced Routing and Traffic Control

Pre-Script: I need to give some history of time keeping and clock making before I tell you what time it is.

On 12/31/20 12:49 PM, Grant Taylor wrote:
Absolutely.  I've got nine of these "containers" running on the system that I'm typing this reply on.

Here are some more details on what I'm doing in case you want to try something similar.

I have allocated RFC 6598 Shared Address Space[1] to my test VLANs, which currently live on my workstation. My home network has routes to 100.64.0.0/10 via my workstation's LAN IP.

100.64.0.0/24 is the core / backbone / area 0 of these lab VLANs.

Each lab VLAN has a separate /24 therein.
   Lab 1 = 100.64.1.0/24
   Lab 2 = 100.64.2.0/24
   Lab 3 = 100.64.3.0/24
   ...

My workstation has routes to the lab subnets via each "container" (network namespace) that is doing the very type of routing that I think you're asking about.

   100.64.1.0/24 via 100.64.0.1
   100.64.2.0/24 via 100.64.0.2
   100.64.3.0/24 via 100.64.0.3
   ...
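
For completeness, a sketch of how those routes could be added on the workstation. This assumes the host's lab0-facing vEth has its own address in 100.64.0.0/24; the 100.64.0.254 here is purely illustrative and does not appear in the script below.

        # sudo ip addr add 100.64.0.254/24 dev lab0
        # sudo ip route add 100.64.1.0/24 via 100.64.0.1
        # sudo ip route add 100.64.2.0/24 via 100.64.0.2
        # sudo ip route add 100.64.3.0/24 via 100.64.0.3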

I am using logical (vEth) interfaces between all the network namespaces / "containers". -- I do tuck most of them away in another network namespace / "container" so that I don't see a bunch of ... unsightly interfaces when running "ip" / "ifconfig" / et al. in my host / root / unnamed network namespace.

I have a vEth from the host into what I call lab0. Each of the other routing network namespaces / "containers" has a vEth to lab0 and to the host. lab0 bridges all of the vEths therein to create one broadcast domain that connects the host and all of the lab network namespaces / "containers".

This means that each network namespace / "container" can route between its vEth that connects to the bridge and the vEth that connects back to the host.

   lab1 routes between 100.64.0.1/24 and 100.64.1.254/24
   lab2 routes between 100.64.0.2/24 and 100.64.2.254/24
   lab3 routes between 100.64.0.3/24 and 100.64.3.254/24
   ...
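
One step I don't see in the retyped script below: each routing namespace presumably needs IPv4 forwarding turned on before it will route between its two vEths, since a fresh network namespace defaults to forwarding off. A guess at what that would look like, per namespace:

        # sudo ip netns exec lab1 sysctl -q net.ipv4.ip_forward=1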

The purpose of these routing network namespaces / "containers" is so that I can mess around with various things in VirtualBox (et al.) on the host and have access to 11 different networks (home LAN, virtual backbone, and each lab network). This enables me to play with various things using network namespaces / "containers" as routers.

I have had as many as 100 of these running on my system at one time with no ill effect. (Obviously the VMs connected to them have an effect. But that's the VMs, not the network namespaces / "containers".)

/*
** What time is it?
*/

I create all of this with a 25-line shell script.

1)  I create the directories (transient b/c of tmpfs) that are needed.
    A)  "ip netns" uses /run/netns, so I create it and mountns & utsns following suit.
        # sudo mkdir -p /run/{mount,net,uts}ns
    B)  Network namespaces / "containers" use their own mount point.
        # sudo touch /run/{mount,net,uts}ns/lab0
2)  I create / instantiate the first network namespace / "container".
        # sudo unshare --mount=/run/mountns/lab0 --net=/run/netns/lab0 --uts=/run/utsns/lab0 /bin/hostname lab0

Aside: unshare creates / instantiates the network namespace / "container" to run the /bin/hostname command. Because of the mount points, it does not destroy the namespace / "container" when that command exits -- which is the default behavior. See the man page for more details.
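
Not part of the script, but since the bind mounts are what keep a namespace / "container" alive, tearing one down should just be a matter of unmounting and removing them. A minimal sketch:

        # sudo umount /run/{mount,net,uts}ns/lab0
        # sudo rm /run/{mount,net,uts}ns/lab0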

3)  I create the vEth pair to connect the host to lab0, and the bridge inside lab0.
        # sudo ip link add lab0 type veth peer name $HOSTNAME netns lab0
        # sudo ip link set lab0 up
        # sudo ip netns exec lab0 ip link set lo up
        # sudo ip netns exec lab0 ip link add bri0 type bridge
        # sudo ip netns exec lab0 ip link set bri0 up
        # sudo ip netns exec lab0 ip link set $HOSTNAME master bri0
        # sudo ip netns exec lab0 ip link set $HOSTNAME up

Steps 1-3 create the central netns.
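
A quick sanity check at this point, using stock iproute2 commands (not part of the original script):

        # ip netns list
        # sudo ip netns exec lab0 ip -br link show
        # sudo ip netns exec lab0 bridge link show

The first should list lab0, and the latter two should show lo, bri0, and the $HOSTNAME vEth, with the vEth enslaved to bri0.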

4)  I create / instantiate and configure the network on the other lab network namespaces / "containers", all at the same time, via a loop.
        # for l in {1..9}; do
        #    sudo touch /run/{mount,net,uts}ns/lab${l}
        #    sudo unshare --mount=/run/mountns/lab${l} --net=/run/netns/lab${l} --uts=/run/utsns/lab${l} /bin/hostname lab${l}
        #    sudo ip link add lab${l} type veth peer name lab${l}i netns lab${l}
        #    sudo ip link set lab${l} up
        #    sudo sysctl -q net.ipv6.conf.lab${l}.disable_ipv6=1 > /dev/null
        #    sudo ip netns exec lab${l} ip link set lo up
        #    sudo ip netns exec lab${l} ip link set lab${l}o up
        #    sudo ip netns exec lab${l} ip addr add 100.64.0.${l}/24 dev lab${l}o
        #    sudo ip netns exec lab${l} ip link set lab${l}i up
        #    sudo ip netns exec lab${l} ip addr add 100.64.${l}.254/24 dev lab${l}i
        # done

Note:  I manually retyped this, so there may be typos.
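
In particular, the loop as shown creates the host-facing vEth pair (lab${l} / lab${l}i) but never creates the lab${l}o interface that it then configures. Based on the description above, the missing piece is presumably a second vEth pair into lab0's bridge, something along these lines (the lab0-side name lab${l}b is a guess on my part):

        #    sudo ip netns exec lab0 ip link add lab${l}b type veth peer name lab${l}o netns lab${l}
        #    sudo ip netns exec lab0 ip link set lab${l}b master bri0
        #    sudo ip netns exec lab0 ip link set lab${l}b up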

Aside: I've not yet configured IPv6 in the labs, so I disable it. (My home LAN is IPv6 enabled.)

This provides nine L2 lab# interfaces on the host so that I can connect VMs to them. The host does /not/ have IP addresses in these lab VLANs. The host must route through the lab# network namespaces / "containers" to get to attached VMs. Said VMs must do similarly to access the host and the Internet.
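
For a VM attached to, say, lab1, that just means an address in the lab subnet and a default route pointing at the namespace. The addresses and interface name here are illustrative:

        # ip addr add 100.64.1.10/24 dev eth0
        # ip link set eth0 up
        # ip route add default via 100.64.1.254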

I believe these network namespaces / "containers" are exactly what you're wanting to do; i.e. routing between networks inside of a network namespace / "container".

[1] Yes, I know the danger of conflict with ISPs that do Carrier Grade NAT. Mine does not. So I chose to use this space to avoid colliding with typical RFC 1918 address space.



--
Grant. . . .
unix || die
