Re: How should libvirt apps enable virtio-pci for aarch64?

On 12/07/2015 07:27 AM, Daniel P. Berrange wrote:
> On Sun, Dec 06, 2015 at 09:46:56PM -0500, Cole Robinson wrote:
>> Hi all,
>>
>> I'm trying to figure out how apps should request virtio-pci for libvirt + qemu
>> + arm/aarch64. Let me provide some background.
>>
>> qemu's original virtio support for arm/aarch64 is via virtio-mmio, i.e.
>> libvirt XML <address type='virtio-mmio'/>. Currently this is what libvirt
>> sets as the default address for all arm/aarch64 virtio devices in the
>> XML. Long term though, all arm virt will likely be using virtio-pci: it's
>> faster, enables hotplug, is more x86-like, etc.
>>
>> Support for virtio-pci is newer and not as widespread. qemu has had the
>> necessary support since 2.4 at least, but the guest side isn't widely
>> shipped yet. For example, Fedora 23 and earlier don't work out of the box
>> with virtio-pci. Internal RHELSA (RHEL Server for Aarch64) builds have
>> recently gotten it working, AFAIK.
>>
>> Libvirt has some support for enabling virtio-pci with aarch64, commits added
>> by Pavel Fedin in v1.2.19. (See e8d55172544c1fafe31a9e09346bdebca4f0d6f9). The
>> patches add a PCIe controller automatically to the XML (and qemu commandline)
>> if qemu-system-aarch64 supports it. However virtio-mmio is still used as the
>> default virtio address, given the current lack of OS support.
>>
>> So we are at the point where libvirt apps want to enable this, but
>> presently there isn't a good solution; the only option is to fully
>> allocate an <address type='pci' ...> for each virtio device in the XML
>> (see the example after this list). This is suboptimal for 2 reasons:
>>
>> #1) apps need to duplicate libvirt's non-trivial address type=pci allocation logic
>>
>> #2) apps have to add an <address> block for every virtio device, which is
>> less friendly than the x86 case where this is rarely required. Any XML
>> device snippets that work for x86 likely won't give the desired result
>> for aarch64, since they will default to virtio-mmio. Think of virsh
>> attach-device/attach-disk commands.
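>>
>> For reference, the fully allocated address that apps currently have to
>> generate themselves, per device, looks something like this (the bus/slot
>> values here are made up):
>>
>>     <address type='pci' domain='0x0000' bus='0x02' slot='0x03'
>>              function='0x0'/>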
> 
> Yeah, this is very undesirable for a default out-of-the-box config - we
> should always strive to "do the best thing" when no address is given.
> 
>> Here are some possible solutions:
>>
>> * Drop the current behavior of adding a PCIe controller unconditionally,
>> and instead require apps to specify it in the XML. Then, if libvirt sees
>> a PCIe controller in the XML, default the virtio address type to pci.
>> Apps will know whether the OS they are installing supports virtio-pci
>> (eventually via libosinfo), so this is how we can implicitly ask libvirt
>> to 'allocate us pci addresses'.
> 
> Yes, clearly we need to record in libosinfo whether an OS can do PCI vs
> MMIO.
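> 
> Presumably libosinfo would model that as device entries on the osinfo-db
> OS record, something along these lines (a sketch - the id shown is
> virtio-blk, 1af4/1001):
> 
>     <devices>
>       <device id="http://pcisig.com/pci/1af4/1001"/>
>     </devices>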
> 
>> Upsides:
>> - Solves both the stated problems.
>> - Simplest addition for applications, IMO.
>>
>> Downsides:
>> - Requires a libvirt behavior change, no longer adding the PCIe controller by
>> default. But in practice I don't think it will really affect anyone, since
>> there isn't really any OS support for virtio-pci yet, and no apps support it
>> either AFAIK.
>> - The PCIe controller is not strictly about virtio-pci; it also enables
>> plain emulated PCI devices. So there is a use case for using the PCIe
>> controller for, say, a graphics card even while your OS doesn't yet
>> support virtio-pci. In the big picture though, this is a small time
>> window with current OSes, and users can work around it by manually
>> requesting <address type='virtio-mmio'/>, so medium/long term this isn't
>> a big deal IMO.
>> - The PCIe controller XML is:
>>     <!-- the machine's implicit PCIe root bus -->
>>     <controller type='pci' index='0' model='pcie-root'/>
>>     <!-- bridge from PCIe down to legacy PCI; its slots are not
>>          hotpluggable -->
>>     <controller type='pci' index='1' model='dmi-to-pci-bridge'/>
>>     <!-- standard PCI bridge providing hotpluggable PCI slots -->
>>     <controller type='pci' index='2' model='pci-bridge'/>
>> I have no idea if that's always going to be the expected XML, so maybe
>> it's not wise to hardcode that in apps. Laine?
>>
>>
>> * Next idea: Users specify something like <address type='pci'/> and
>> libvirt fills in the rest of the address for us.
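>>
>> So a device snippet (or virsh attach-device input) would only need to
>> carry the bare:
>>
>>     <address type='pci'/>
>>
>> and libvirt would expand it to a full domain/bus/slot/function during its
>> usual address assignment pass.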
>>
>> Upsides:
>> - We can stick with the current PCIe controller default and avoid some of the
>> problems mentioned above.
>> - An auto address feature may be useful in other contexts as well.
>>
>> Downsides:
>> - Seems potentially tricky to implement in libvirt code. There are many
>> places that check type=pci and key off that, so it seems like it would be
>> easy to miss updating a check and cause regressions. Maybe we could add a
>> new type like auto-pci to make it explicit. There's probably some
>> implementation trick to make this safe, but at first glance it looked a
>> little dicey.
> 
> I'm not sure it is actually all that hairy - it might be as simple as
> updating only qemuAssignDevicePCISlots so that instead of:
> 
>     if (def->controllers[i]->info.type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_NONE)
>                 continue;
> 
> it handles type=pci with no values set, too.
> 

I think my fear was that there are other places in domain_conf that check for
ADDRESS_TYPE_PCI before we even get to assigning PCI slots. But I'll poke at it.
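
Something like this, maybe (a completely untested sketch; the "empty address"
check is inlined just to illustrate the idea):

    virDomainDeviceInfoPtr info = &def->controllers[i]->info;

    /* treat an explicit type=pci with no address attributes set the
     * same as "no address given", so a slot gets assigned below */
    bool wantsPCISlot =
        info->type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI &&
        info->addr.pci.domain == 0 &&
        info->addr.pci.bus == 0 &&
        info->addr.pci.slot == 0 &&
        info->addr.pci.function == 0;

    if (info->type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_NONE && !wantsPCISlot)
        continue;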

>> - Doesn't really solve problem #2 mentioned above... maybe we could
>> change the address allocation logic to default to virtio-pci if there's
>> already a virtio-pci device in the XML, but that's more work.
>> - More work for apps, but nothing horrible.
> 
>> * Change the default address type from virtio-mmio to pci, if qemu
>> supports it. I'm listing this for completeness. In the short term this
>> doesn't make sense, as there aren't any OS releases that will work with
>> this default. However it might be worth considering for the future, maybe
>> keying off a particular qemu version or machine type. I suspect that two
>> years from now no one is going to be using virtio-mmio, so long term it's
>> not an ideal default.
> 
> Yeah, when QEMU starts doing versioned machine types for AArch64 we could
> do this, but then this just kind of flips the problem around - apps now
> need to manually add <address type='virtio-mmio'/> for every device if
> deploying an OS that can't do PCI. Admittedly this is slightly easier,
> since the address rules for virtio-mmio are simpler than the address
> rules for PCI.
> 
>> I think the first option is best (keying off the PCIe controller
>> specified by the user), with a longer term plan to change the default
>> from mmio to pci. But I'm not really sold on anything either way, so I'm
>> interested to hear if anyone else has ideas.
> 
> I guess I'd tend towards option 1 too - only adding PCI controller if we
> actually want to use PCI with the guest.
> 

Sounds good, thanks for the comments.

- Cole
