hosted VMs, VLANs, and firewalld

I'm looking for some information regarding the interaction of KVM,
VLANs, firewalld, and the kernel's forwarding configuration.  I would
appreciate input especially from anyone already running a similar
configuration in production.  In short, I'm trying to figure out if
a current configuration is inadvertently opening up traffic across
network segments.

On earlier versions of CentOS I've run HA clusters with and without
VMs (in this case, based on xen).  On those clusters, both the host
machine's IPs and the VM IPs were in the same subnet (call it the
DMZ).

In a CentOS 7 test HA cluster I'm building, I want both traditional
services running on the cluster and VMs running on both nodes (not
necessarily under control of the cluster).  In the new setup, I'd
like to retain *some* VMs on the same subnet as the host machine's IP,
but have other VMs on different VLANs.  So the physical topology
looks like this:

      ----------------- DMZ ------------------
      |                                      |
 bridged-if                             bridged-if
      |                                      |
   node-1 --------- heartbeat-if --------  node-2
      |                                      |
    --|--                                  --|--
   /     \                                /     \
 vlan2  vlan3                           vlan2  vlan3
   \     /                                \     /
 bridged-if                             bridged-if
      |                                      |
      ---------------          ---------------
                    |          |
                   managed switch
                   |            |
               vlan2-net    vlan3-net

A given VM will be assigned a single network interface, either in
the DMZ, on vlan2, or on vlan3.  The default routes for each of those
networks point at different gateways.  (The CentOS boxes in question
are *not* intended to be routers.)

I'll take a brief aside here to describe the bridge/vlan configuration:

Interface Details
=================

  On the DMZ side, the physical interface is eno1 on which is layered
  bridge br0. br0 is assigned a static IP used by the physical node
  (host OS).  VMs that should be on the DMZ get assigned br0 as their
  underlying network device.

  On the other network side, the physical interface is enp1s0, on
  which is layered bridge br2, on which is layered VLAN devices
  enp1s0.2 and enp1s0.3.  None of these have IPs assigned in the host
  OS; the host is not supposed to have direct access to vlan2 or
  vlan3.  VMs that are supposed to be on vlan2 and vlan3 are assigned
  enp1s0.2 or enp1s0.3, respectively, as their underlying network
  device.

=================
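Roughly, the nmcli equivalent of that layering would look like the
following (the addresses are placeholders, and exact property names
can vary with the NetworkManager version; this is a sketch, not a
dump of my actual config):

```shell
# DMZ side: bridge br0 over physical eno1; the host's static IP
# lives on the bridge (192.0.2.10/24 is a placeholder)
nmcli con add type bridge ifname br0 con-name br0 \
    ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
nmcli con add type bridge-slave ifname eno1 master br0

# VLAN side: bridge br2 over enp1s0, with no IP in the host OS,
# plus VLAN subinterfaces enp1s0.2 and enp1s0.3 for the VMs
nmcli con add type bridge ifname br2 con-name br2 \
    ipv4.method disabled ipv6.method ignore
nmcli con add type bridge-slave ifname enp1s0 master br2
nmcli con add type vlan ifname enp1s0.2 dev enp1s0 id 2 \
    ipv4.method disabled ipv6.method ignore
nmcli con add type vlan ifname enp1s0.3 dev enp1s0 id 3 \
    ipv4.method disabled ipv6.method ignore
```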

A quick test with a VM using enp1s0.2 seems to show the desired
connectivity.

However, I'm looking at the firewalld configuration on the host nodes
and am not sure if I'm missing something.  There are currently two
active zones defined, 'dmz' and 'heartbeat'.  The 'heartbeat' zone
contains only the physical interface for the heartbeat network
between the nodes, which is fine.

The 'dmz' zone contains br0, br2, eno1, enp1s0, enp1s0.2, and enp1s0.3.
It looks like, by default, firewall rules aren't applied to bridge
devices, so we can ignore br0 and br2.  eno1 is an expected interface
for that zone.  Where it gets muddy is enp1s0, enp1s0.2, and enp1s0.3.
Since the host shouldn't have any IPs on those interfaces, what is the
relevance of having them in the 'dmz' zone (or any zone)?  By having
them in the 'dmz' zone, does that mean host firewall rules will
impact the VMs?
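For anyone wanting to reproduce what I'm seeing, these are the
commands I've been using to inspect the zone assignments (zone and
interface names as above):

```shell
# Zones with interfaces or sources bound to them
firewall-cmd --get-active-zones

# Everything firewalld applies to the 'dmz' zone
firewall-cmd --zone=dmz --list-all

# Which zone a particular interface is bound to
firewall-cmd --get-zone-of-interface=enp1s0.2
```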

Finally, `sysctl -a | grep forward | grep ' = 1'` shows:

net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.br2.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.eno1.forwarding = 1
net.ipv4.conf.enp1s0.forwarding = 1
net.ipv4.conf.enp1s0/2.forwarding = 1
net.ipv4.conf.enp1s0/3.forwarding = 1
net.ipv4.conf.enp4s0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.virbr0.forwarding = 1
net.ipv4.conf.virbr0-nic.forwarding = 1
net.ipv4.ip_forward = 1
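If I understand correctly, whether the host's netfilter rules see
*bridged* traffic at all hinges on the br_netfilter sysctls rather
than the per-interface forwarding flags above, so I've also been
checking those (corrections welcome if I have this backwards):

```shell
# When br_netfilter is loaded and these are 1, frames crossing a
# bridge are fed through the host's iptables FORWARD chain; when 0
# (or the module is absent), bridged traffic bypasses those rules.
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```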

I understand that for the bridging and VLANs to work I likely need
these forwarding settings active, but am I opening things up so that
(for example) a maliciously crafted packet seen on the enp1s0.2
interface could jump onto the DMZ subnet via eno1?

I have to admit, the firewall-config GUI seems oriented more toward
either the local machine or other machines behind NAT than toward a
router.  (I don't want the host nodes acting as routers in general,
but how can I tell if they are doing so inadvertently?)

Further, my google-fu isn't turning up much definitive information
on how all the pieces interact.  I'm hoping that packets seen on the
DMZ interface bound for vlan2 or vlan3 are dropped, and that the host
can't be reached via vlan2 or vlan3, but it's not clear that this is
the case.
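A couple of checks that might show whether a node is actually routing
between segments (the subnet below is a placeholder):

```shell
# The host can only route between segments where it has an address;
# verify no IPs or routes exist for the vlan2/vlan3 networks
ip -4 addr show dev enp1s0.2
ip -4 route

# Watch the DMZ interface for anything sourced from the vlan2 net
# (replace 198.51.100.0/24 with the real vlan2 subnet)
tcpdump -ni eno1 net 198.51.100.0/24
```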

Clues are welcome.

Devin

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


