Re: Networking options in libvirt_lxc

On Wed, Apr 03, 2013 at 10:04:20AM +0000, Purcareata Bogdan-B43198 wrote:
> Hello,
> 
> I am doing some research on [subject] and I would like to find out some information regarding various scenarios. I've studied the official documentation at [1] and some of the mailing list archives. The configurations I have in mind are somewhat inspired by what the SourceForge LXC package offers in terms of networking.
> 
> What I've tested so far and works:
> - Shared networking - all host interfaces are present in the container if no <interface> tag has been specified in the domain configuration. I'm assuming this is because the container is started in the same network namespace as the host. Is it possible to make only a subset of these interfaces visible inside the container?

Yes, if no <interface> is listed, we do not enable network namespaces.
You can force network namespaces by setting

  <features>
    <privnet/>
  </features>

which will mean all you get is a loopback device.

If you need to make a subset of host interfaces visible, you'd need to
use <privnet/> and then the (not yet implemented) <hostdev> mode you
describe at the end.
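
For reference, <features> sits at the top level of the container's <domain>
XML. A rough minimal sketch (the name, memory size and init path below are
just placeholders):

  <domain type='lxc'>
    <name>demo</name>
    <memory unit='KiB'>524288</memory>
    <os>
      <type>exe</type>
      <init>/bin/sh</init>
    </os>
    <features>
      <privnet/>
    </features>
    <devices>
      <console type='pty'/>
    </devices>
  </domain>

Define and start it against the LXC driver with virsh -c lxc:/// as usual.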

> - Bridge to LAN - connecting a domain interface to a host bridge;
> - Direct attachment through a macvtap device - all 3 modes (vepa, bridge and private) work as expected; "passthrough" requires some capabilities in the physical device (SR-IOV), which I don't have - assuming I have a device with this capability, is this configuration supported by (implemented in) the libvirt_lxc driver?

You don't need SR-IOV for mode='passthrough' - it works with any host NIC.
SR-IOV just makes it more useful if you need to run lots of guests and each
needs its own NIC, since with mode='passthrough' you have a 1:1 mapping between
NICs & guests, whereas with the other macvtap modes you have a 1:N mapping.
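
FWIW the direct attachment config would look something like this ('eth0'
is just a placeholder for whichever host NIC you want to hand over):

  <interface type='direct'>
    <source dev='eth0' mode='passthrough'/>
  </interface>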

> Other scenarios I would be interested in:
> - Host network interface private to the container - much like what lxc.network.phys offers: "dedicated NIC from host passed through". I've read some documentation about <hostdev> and how to assign PCI devices to a virtual machine, but I understand this is only possible with KVM - it's assigned from the kernel, which makes more sense, etc. However, I've also read a thread on the mailing list regarding <hostdev mode="capabilities">, which offers a container access to a host device, but it's currently only applicable to block and character devices. Is there currently any way to make a host interface private to a container?

That isn't currently supported, but we could easily wire that up with
<hostdev mode='capabilities'>. It just requires a simple API call
to move the host NIC into the container's network namespace.
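
If/when that gets wired up I'd expect the XML to mirror the existing
block/char hostdev syntax, something along these lines (purely
illustrative - none of this is implemented yet, and 'eth1' is just a
placeholder):

  <hostdev mode='capabilities' type='net'>
    <source>
      <interface>eth1</interface>
    </source>
  </hostdev>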


Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|




