On Tue, Nov 19, 2019 at 9:16 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 2019/11/19 10:14 PM, Jason Gunthorpe wrote:
> On Tue, Nov 19, 2019 at 10:02:08PM +0800, Jason Wang wrote:
>> On 2019/11/19 8:38 PM, Jason Gunthorpe wrote:
>>> On Tue, Nov 19, 2019 at 10:41:31AM +0800, Jason Wang wrote:
>>>>> On 2019/11/19 4:28 AM, Jason Gunthorpe wrote:
>>>>> On Mon, Nov 18, 2019 at 03:27:13PM -0500, Michael S. Tsirkin wrote:
>>>>>> On Mon, Nov 18, 2019 at 01:41:00PM +0000, Jason Gunthorpe wrote:
>>>>>>> On Mon, Nov 18, 2019 at 06:59:21PM +0800, Jason Wang wrote:
>>>>>>>> +struct bus_type mdev_virtio_bus_type;
>>>>>>>> +
>>>>>>>> +struct mdev_virtio_device {
>>>>>>>> +	struct mdev_device mdev;
>>>>>>>> +	const struct mdev_virtio_ops *ops;
>>>>>>>> +	u16 class_id;
>>>>>>>> +};
>>>>>>> This seems to share nothing with mdev (ie mdev-vfio), why is it on the
>>>>>>> same bus?
>>>>>> I must be missing something - which bus do they share?
>>>>> mdev_bus_type ?
>>>>>
>>>>> Jason
>>>> Note: virtio has its own bus: mdev_virtio_bus_type. So they are not the same
>>>> bus.
>>> That is even worse, why involve struct mdev_device at all then?
>>>
>>> Jason
>>
>> I don't quite get the question here.
> In the driver model the bus_type and foo_device are closely
> linked.
I don't get the definition of "closely linked" here. Do you think the
bus and device implemented in the virtual bus series are closely linked? If
yes, how do they achieve that?
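For reference, a minimal sketch of what "closely linked" usually means in the
driver model: the bus-specific device type embeds struct device, and every
instance is registered with dev.bus pointing at that bus's bus_type, so the
bus's match/probe code can safely assume the type. All names here (foo_device,
foo_bus_type, foo_device_create) are made up for illustration and are not
taken from either series:

#include <linux/device.h>
#include <linux/slab.h>

extern struct bus_type foo_bus_type;	/* the bus this device type belongs to */

struct foo_device {
	struct device dev;		/* always lives on foo_bus_type */
	u32 id;				/* bus-specific matching data */
};
#define to_foo_device(d) container_of(d, struct foo_device, dev)

static void foo_device_release(struct device *dev)
{
	kfree(to_foo_device(dev));
}

static struct foo_device *foo_device_create(struct device *parent, u32 id)
{
	struct foo_device *fdev = kzalloc(sizeof(*fdev), GFP_KERNEL);

	if (!fdev)
		return NULL;

	fdev->id = id;
	fdev->dev.parent = parent;
	fdev->dev.bus = &foo_bus_type;	/* this is the "close link" */
	fdev->dev.release = foo_device_release;
	dev_set_name(&fdev->dev, "foo%u", id);

	if (device_register(&fdev->dev)) {
		put_device(&fdev->dev);	/* drops the ref, calls release */
		return NULL;
	}
	return fdev;
}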
> Creating 'mdev_device' instances and overriding the bus_type
> is a very abusive thing to do.
RJM>] abusive is a subjective term. Looking at the whole context of the vDPA framework, I still believe that extending the mdev API is preferable. Without the framework, every vendor would have to "mediate" their own devices, which would force us to effectively "duplicate" the mdev core code and implement our own functionality on top. The core idea of VIRTIO is to have a common interface, and having a framework that also supports a lot of commonality is fantastic, since we (hw vendors), too, really want to get out of the business of crafting/verifying/maintaining device drivers for every version of Linux/Windows/... Heck, I'm hoping that a generic sample vDPA parent driver (i.e. sort of like Intel's IFCVF driver but even more so) would be good enough for our product such that we (Brcm) don't have to supply any driver.
Ok, mdev_device (without this series) had:
struct mdev_device {
	struct device dev;
	struct mdev_parent *parent;
	guid_t uuid;
	void *driver_data;
	struct list_head next;
	struct kobject *type_kobj;
	struct device *iommu_device;
	bool active;
};
So there's nothing bus- or VFIO-specific in it. And what the virtual bus series has is:
struct virtbus_device {
	const char *name;
	int id;
	const struct virtbus_dev_id *dev_id;
	struct device dev;
	void *data;
};
Are there any fundamental reasons why you think mdev_device is being abused? I
wouldn't expect the answer to be the generic members such as the kobj or the
iommu device pointer, etc.
>
>> My understanding of mdev is that it is a mediator between the driver and the
>> physical device when it's hard to let them talk directly due to the
>> complexity of refactoring and maintenance.
> Really, mdev is to support vfio with a backend other than PCI, nothing
> more.
That partially explains why it was called mdev. So for virtio, we want the
standard virtio driver to talk to a backend other than virtio.
As for PCI, the API is actually generic enough to support devices
other than PCI, e.g. the AP bus.
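To illustrate the direction (this is only a sketch built on my own assumptions,
not code from this series), a virtio-mdev transport driver could forward the
standard virtio_config_ops callbacks to the parent's ops, so an unmodified
virtio driver never sees the vendor hardware. The reduced stand-in types and
the get_features member of mdev_virtio_ops below are assumptions made to keep
the sketch self-contained:

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Reduced stand-ins for the patch's types, only what the sketch needs. */
struct mdev_virtio_device;
struct mdev_virtio_ops {
	u64 (*get_features)(struct mdev_virtio_device *mvdev);	/* assumed member */
};
struct mdev_virtio_device {
	const struct mdev_virtio_ops *ops;
};

/* The transport-level device that the standard virtio driver binds to. */
struct virtio_mdev_device {
	struct virtio_device vdev;
	struct mdev_virtio_device *mvdev;
};
#define to_virtio_mdev(v) container_of(v, struct virtio_mdev_device, vdev)

static u64 virtio_mdev_get_features(struct virtio_device *vdev)
{
	struct virtio_mdev_device *vm = to_virtio_mdev(vdev);

	/* Forward the request to the mediating parent driver. */
	return vm->mvdev->ops->get_features(vm->mvdev);
}

static const struct virtio_config_ops virtio_mdev_config_ops = {
	.get_features = virtio_mdev_get_features,
	/* .get, .set, .get_status, .set_status, .find_vqs, ... would
	 * forward to the parent's ops in the same way. */
};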
>
> Abusing it for other things is not appropriate. ie creating an
> instance and not filling in most of the vfio focused ops is an abusive
> thing to do.
Well, only about half of the ops in mdev_parent_ops are VFIO-focused; the
various methods could be made more generic to avoid code duplication. No?
>
>> hardware that can offload the virtio datapath but not the control path. We want to
>> present a unified interface (standard virtio) instead of a vendor-specific
>> interface, so a mediator level in the middle is a must. For the virtio driver, the
>> mediator presents a fully virtio-compatible device. For the hardware, the mediator
>> mediates the differences between the behavior defined by the virtio spec and
>> the real hardware.
> If you need to bind to the VFIO driver then mdev is the right thing to
> use, otherwise it is not.
>
> It certainly should not be used to bind to random kernel drivers. This
> problem is what this virtual bus idea Intel is working on might solve.
What do you mean by random here? With this series, we have a dedicated bus
and dedicated drivers with a matching method to make sure the binding is
correct.
RJM>] I think it's pretty clear that it's not random. The class id takes care of the match and allows the flexibility to choose vhost-mdev vs virtio-mdev, depending on whether the deployment is bare-metal or virtualized.
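A rough sketch of what such a class-id-driven match could look like, building
on the struct mdev_virtio_device quoted at the top of the thread; the driver
struct, the constant values, and their names are assumptions for illustration,
not the series' actual definitions:

#include <linux/device.h>
#include <linux/mdev.h>

/* Illustrative class ids; the real series defines its own values. */
#define MDEV_CLASS_ID_VHOST	1	/* instance meant for vhost-mdev  */
#define MDEV_CLASS_ID_VIRTIO	2	/* instance meant for virtio-mdev */

/* Driver type for mdev_virtio_bus_type. */
struct mdev_virtio_driver {
	struct device_driver driver;
	u16 class_id;			/* which class this driver serves */
};

static int mdev_virtio_bus_match(struct device *dev, struct device_driver *drv)
{
	/* mdev_virtio_device embeds mdev_device, which holds the struct device. */
	struct mdev_virtio_device *mvdev =
		container_of(dev, struct mdev_virtio_device, mdev.dev);
	struct mdev_virtio_driver *mvdrv =
		container_of(drv, struct mdev_virtio_driver, driver);

	/* The parent picks the class id when it creates the instance,
	 * so the binding is deterministic, not random. */
	return mvdev->class_id == mvdrv->class_id;
}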
>
> It seems the only thing people care about with mdev is the GUID
> lifecycle stuff, but at the same time folks like Parav are saying they
> don't want to use that lifecycle stuff and prefer devlink
> instead.
I'm sure you will need to handle other issues besides the GUID when starting
to write a real hardware driver, e.g. IOMMU integration and device types,
which have already been settled by mdev.
>
> Most likely, at least for virtio-net, everyone else will be able to
> use devlink as well, making it much less clear if that GUID lifecycle
> stuff is a good idea or not.
This assumption is wrong; we already have at least two concrete
examples of vDPA devices that don't use devlink:
- Intel IFC, where virtio is done at the VF level
- Ali Cloud ECS instance, where virtio is done at the PF level
Again, device slicing is only part of our goal. The major goal is to
have a mediator level that can take over the virtio control path between
a standard virtio driver and hardware whose datapath is virtio-compatible
but whose control path is not.
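To make that concrete, the control-path ops a parent driver implements could
look roughly like the following; this is an illustrative sketch of the kind of
callbacks involved, not the exact mdev_virtio_ops from the series:

#include <linux/types.h>

struct mdev_virtio_device;

struct mdev_virtio_ops {
	/* feature negotiation is mediated, not passed straight through */
	u64  (*get_features)(struct mdev_virtio_device *mvdev);
	int  (*set_features)(struct mdev_virtio_device *mvdev, u64 features);

	/* virtqueue setup: addresses, size, enable */
	int  (*set_vq_address)(struct mdev_virtio_device *mvdev, u16 idx,
			       u64 desc, u64 avail, u64 used);
	void (*set_vq_num)(struct mdev_virtio_device *mvdev, u16 idx, u32 num);
	void (*set_vq_ready)(struct mdev_virtio_device *mvdev, u16 idx, bool ready);

	/* doorbell: the datapath itself stays in the hardware */
	void (*kick_vq)(struct mdev_virtio_device *mvdev, u16 idx);

	/* device status and config space */
	u8   (*get_status)(struct mdev_virtio_device *mvdev);
	void (*set_status)(struct mdev_virtio_device *mvdev, u8 status);
	void (*get_config)(struct mdev_virtio_device *mvdev, unsigned int offset,
			   void *buf, unsigned int len);
};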
Thanks
>
> Jason