On 2018/7/24 11:43 PM, Andrea Bolognani wrote:
> On Tue, 2018-07-24 at 17:15 +0200, Boris Fiuczynski wrote:
>> On 07/24/2018 03:41 PM, Andrea Bolognani wrote:
>>> Being compatible with the existing PCI machinery is certainly a
>>> good idea when it makes sense to do so, but I'm not quite convinced
>>> that is the case here.
>> From a user perspective:
>> I take your example below and reduce it to pci only like this:
>>   <controller type='pci' index='1' model='pci-bridge'/>
>>   <hostdev mode='subsystem' type='pci' managed='no'>
>>     <driver name='vfio'/>
>>     <source>
>>       <address domain='0xffff' bus='0x00' slot='0x00' function='0x0'/>
>>     </source>
>>     <address type='pci' domain='0x0000' bus='0x01' slot='0x1f' function='0x0'/>
>>   </hostdev>
>> This works on x86 as well as on s390, whereas your suggested zpci
>> address type would not allow it. This is what I wanted to express
>> with the word "compatible".
>> As I wrote before: it would also be valid for a user not to care
>> about the domain, bus, slot and function attributes and to reduce
>> the specified set to e.g. <address type='pci' uid='0xffff'/>
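>>
>> As a sketch (hypothetical: this assumes libvirt would accept a plain
>> type='pci' address carrying only the proposed uid attribute and
>> auto-assign everything else), the full hostdev would then read:
>>   <hostdev mode='subsystem' type='pci' managed='no'>
>>     <driver name='vfio'/>
>>     <source>
>>       <address domain='0xffff' bus='0x00' slot='0x00' function='0x0'/>
>>     </source>
>>     <!-- domain/bus/slot/function deliberately omitted;
>>          libvirt would have to fill them in -->
>>     <address type='pci' uid='0xffff'/>
>>   </hostdev>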
> That's not really what users and management applications pass to
> libvirt, though: a more realistic example would be
>   <hostdev mode='subsystem' type='pci'>
>     <driver name='vfio'/>
>     <source>
>       <address domain='0xffff' bus='0x00' slot='0x00' function='0x0'/>
>     </source>
>   </hostdev>
> i.e. you specify the host address and leave coming up with a
> suitable guest address entirely up to libvirt, in which case
> whether the resulting address is type=pci or type=zpci hardly
> matters.
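>
> To illustrate (a sketch of what the proposal might produce, not
> output from an actual implementation), after address assignment the
> same hostdev could come back from libvirt as:
>   <hostdev mode='subsystem' type='pci'>
>     <driver name='vfio'/>
>     <source>
>       <address domain='0xffff' bus='0x00' slot='0x00' function='0x0'/>
>     </source>
>     <!-- uid/fid values invented for illustration -->
>     <address type='zpci' uid='0x0001' fid='0x00000000'/>
>   </hostdev>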
> If you want to take device address assignment upon yourself, then
> you're gonna have to assign addresses to controllers as well, not
> to mention specify the entire PCI topology with everything that
> entails... Not exactly a common scenario.
> According to Cornelia's blog post on the subject, the PCI topology
> inside the guest will be determined entirely by the IDs. Is there
> even a way to e.g. use bridges to create a non-flat PCI hierarchy?
> Or to have several PCI devices share the same bus or slot?
> If none of the above applies, then that doesn't look a whole lot
> like PCI to me :)
>>> Moreover, we already have several address types in addition to PCI
>>> such as USB, virtio-mmio, spapr-vio, ccw... Adding yet another one
>>> is not a problem if it makes the interface more sensible.
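>>>
>>> (For instance, a ccw address in the domain XML looks like
>>>   <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
>>> and nobody expects it to resemble a PCI address.)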
>> Sure, you can add one more, but wouldn't you end up with e.g. a
>> hostdev model vfio-pci that uses address type pci on all
>> PCI-supporting architectures except s390, where you would need to
>> use zpci? What would be the benefit for the user or for higher-level
>> management software? I would rather not introduce special handling
>> unless it is required.
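>>
>> Concretely (a sketch based on the address types as proposed), the
>> very same passthrough device would need
>>   <address type='pci' domain='0x0000' bus='0x01' slot='0x1f' function='0x0'/>
>> on x86, but
>>   <address type='zpci' uid='0xffff' fid='0xffffffff'/>
>> on s390, and management software would have to know which of the
>> two to emit.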
> I'm all for offering users an interface that abstracts away as many
> platform-specific quirks as possible, but there's a balance to be
> struck, and we should be careful not to lean too far the opposite
> way.
> With my current understanding, it doesn't look to me like zPCI
> behaves similarly enough to how PCI behaves on other platforms
> for us to sensibly describe both using the same interface, and
> the fact that QEMU had to come up with a specific middleware
> device seems to confirm my suspicion...
> In any case, would you mind answering the questions below? That
> would certainly help me gain a better understanding of the whole
> issue.
>>> More concrete questions: one of the zPCI test cases includes
>>>   <controller type='pci' index='1' model='pci-bridge'/>
>>>   <hostdev mode='subsystem' type='pci' managed='no'>
>>>     <driver name='vfio'/>
>>>     <source>
>>>       <address domain='0xffff' bus='0x00' slot='0x00' function='0x0'/>
>>>     </source>
>>>     <address type='pci' domain='0x0000' bus='0x01' slot='0x1f' function='0x0' uid='0xffff' fid='0xffffffff'/>
>>>   </hostdev>
>>> which translates to
>>>   -device zpci,uid=3,fid=2,target=pci.1,id=zpci3 \
>>>   -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x1 \
>>>   -device zpci,uid=65535,fid=4294967295,target=hostdev0,id=zpci65535 \
>>>   -device vfio-pci,host=ffff:00:00.0,id=hostdev0,bus=pci.1,addr=0x1f \
>>> How does the pci-bridge controller show up in the guest, if at all?
QEMU hides pci-bridge devices and exposes only PCI devices to the
guest. In the above example, QEMU will indeed generate a pci-bridge
device, and it will exist in the PCI topology, but the guest cannot
see it. This is very special.
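
To make the pairing in the command line above explicit (my reading of
the generated arguments):
  zpci uid=3,fid=2              -> target=pci.1    (the pci-bridge, hidden from the guest)
  zpci uid=65535,fid=4294967295 -> target=hostdev0 (the vfio-pci device)
The uid/fid pairs carried by the zpci companions are the identifiers
the guest actually sees.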
Do the bus= and addr= attributes of vfio-pci and pci-bridge in the
example above matter eg. for migration purposes?
Do you mean we should leave generating the bus and addr attributes to
QEMU?