> From: Alex Williamson
> Sent: Wednesday, November 20, 2019 6:58 AM
>
> On Fri, 15 Nov 2019 04:24:35 +0000
> "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>
> > > From: Alex Williamson
> > > Sent: Thursday, November 7, 2019 2:45 AM
> > >
> > > On Wed, 6 Nov 2019 12:20:31 +0800
> > > Zhenyu Wang <zhenyuw@xxxxxxxxxxxxxxx> wrote:
> > >
> > > > On 2019.11.05 14:10:42 -0700, Alex Williamson wrote:
> > > > > On Thu, 24 Oct 2019 13:08:23 +0800
> > > > > Zhenyu Wang <zhenyuw@xxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > This is a refresh of a previously sent series. I had the impression
> > > > > > that some SIOV drivers would still deploy their own create and config
> > > > > > method, so I stopped working on this. But it seems this would still be
> > > > > > useful for other SIOV drivers which simply want the capability to
> > > > > > aggregate resources. So here is the refreshed series.
> > > > > >
> > > > > > The current mdev create interface depends on a fixed mdev type, which
> > > > > > takes a uuid from the user to create an instance of an mdev device. If
> > > > > > the user wants a customized amount of resources for an mdev device, the
> > > > > > only option is to create a new mdev type for it, which may not be
> > > > > > flexible. This requirement comes not only from being able to allocate
> > > > > > flexible resources for KVMGT, but also from Intel Scalable IO
> > > > > > Virtualization, which would use vfio/mdev to allocate arbitrary
> > > > > > resources on an mdev instance. More info in [1] [2] [3].
> > > > > >
> > > > > > To allow creating user-defined resources for an mdev, this series tries
> > > > > > to extend the mdev create interface by adding a new "aggregate=xxx"
> > > > > > parameter following the UUID. For a target mdev type that supports
> > > > > > aggregation, this creates a new mdev device whose resources combine
> > > > > > that number of instances, e.g.
> > > > > >
> > > > > > echo "<uuid>,aggregate=10" > create
> > > > > >
> > > > > > A VM manager such as libvirt can check the mdev type for an
> > > > > > "aggregation" attribute, which indicates that this setting is
> > > > > > supported. If no "aggregation" attribute is found for the mdev type,
> > > > > > the previous single-instance behavior is kept. A new sysfs attribute
> > > > > > "aggregated_instances" is created for each mdev device to show the
> > > > > > allocated number.
> > > > >
> > > > > Given discussions we've had recently around libvirt interacting with
> > > > > mdev, I think that libvirt would rather have an abstract interface via
> > > > > mdevctl[1]. Therefore can you evaluate how mdevctl would support this
> > > > > creation extension? It seems like it would fit within the existing
> > > > > mdev and mdevctl framework if aggregation were simply a sysfs attribute
> > > > > for the device. For example, the mdevctl steps might look like this:
> > > > >
> > > > > mdevctl define -u UUID -p PARENT -t TYPE
> > > > > mdevctl modify -u UUID --addattr=mdev/aggregation --value=2
> > > > > mdevctl start -u UUID
> >
> > Hi, Alex, can you elaborate on why a sysfs attribute is more friendly to
> > mdevctl? What is the complexity of having mdevctl pass an additional
> > parameter at creation time, as this series originally proposed? I just
> > want to clearly understand the limitation of the parameter way. :-)
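[ For concreteness, the "parameter way" amounts to splitting the create
string roughly as sketched below. This is purely illustrative; the helper
name and whether such parsing would live in mdev core or in the vendor
driver are assumptions, not what the series actually implements. ]

#include <linux/kernel.h>
#include <linux/string.h>

/* Illustrative only: extract N from "<uuid>,aggregate=N". */
static int parse_aggregate_option(const char *buf, unsigned int *count)
{
	const char *opt = strstr(buf, ",aggregate=");

	*count = 1;			/* default: single instance */
	if (!opt)
		return 0;		/* no option, keep existing behavior */

	/* everything after '=' must be a plain decimal count */
	return kstrtouint(opt + strlen(",aggregate="), 10, count);
}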
>
> We could also flip this question around: vfio-ap already uses sysfs to
> finish composing a device after it's created, therefore why shouldn't
> aggregation use this existing mechanism? Extending the creation
> interface is a more fundamental change than simply standardizing an
> optional sysfs namespace entry.
>
> > > > > When mdevctl starts the mdev, it will first create it using the
> > > > > existing mechanism, then apply the aggregation attribute, which can
> > > > > consume the necessary additional instances from the parent device, or
> > > > > return an error, which would unwind and return a failure code to the
> > > > > caller (libvirt). I think the vendor driver would then have freedom to
> > > > > decide when the attribute could be modified; for instance it would be
> > > > > entirely reasonable to return -EBUSY if the user attempts to modify the
> > > > > attribute while the mdev device is in-use. Effectively aggregation
> > > > > simply becomes a standardized attribute with common meaning. Thoughts?
> > > > > [cc libvirt folks for their impression] Thanks,
> > > >
> > > > I think one problem is that before mdevctl starts to create the mdev you
> > > > don't know what the vendor attributes are, as we apply mdev attributes
> > > > after create. You may need some lookup depending on the parent... I think
> > > > making aggregation like any other vendor attribute for mdev might be the
> > > > simplest way, but do we want to define its behavior formally? e.g. as
> > > > previously discussed, it should show the maximum instances available for
> > > > aggregation, etc.
> > >
> > > Yes, we'd still want to standardize how we enable and discover
> > > aggregation since we expect multiple users. Even if libvirt were to
> > > use mdevctl as its mdev interface, higher level tools should have an
> > > introspection mechanism available. Possibly the sysfs interfaces
> > > proposed in this series remain largely the same, but I think perhaps
> > > the implementation of them moves out to the vendor driver. In fact,
> > > perhaps the only change to mdev core is to define the standard. For
> > > example, the "aggregation" attribute on the type is potentially simply
> > > a defined, optional, per-type attribute, similar to "name" and
> > > "description". For "aggregated_instances" we already have the
> > > mdev_attr_groups of the mdev_parent_ops; we could define an
> > > attribute_group with .name = "mdev" as a set of standardized
> > > attributes, such that vendors could provide both their own vendor
> > > specific attributes and per-device attributes with a common meaning and
> > > semantic defined in the mdev ABI.
> >
> > Such standardization sounds good.
> >
> > > > The behavior change for the driver is that previously aggregation was
> > > > handled at create time, but with a sysfs attribute it has to handle the
> > > > resource allocation at any point before the device is actually in use.
> > > > I think SIOV drivers which already require some specific config should
> > > > be OK, but I'm not sure about other drivers where this hasn't been
> > > > explored before. Would that be a problem? Kevin?
> > >
> > > Right, I'm assuming the aggregation could be modified until the device
> > > is actually opened; the driver can nak the aggregation request by
> > > returning an errno to the attribute write. I'm trying to anticipate
> > > whether this introduces new complications, for instance races with
> > > contiguous allocations. I think these seem solvable within the vendor
> > > drivers, but please note it if I'm wrong. Thanks,
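[ A hedged sketch of what that standardized per-device attribute could look
like on the vendor driver side: an attribute_group named "mdev" supplied
through mdev_parent_ops.mdev_attr_groups, whose store handler naks changes
while the device is open. The my_mdev_state structure and my_mdev_resize()
are hypothetical vendor-driver pieces invented for illustration; only the
sysfs and mdev plumbing reflects the existing API. ]

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/mdev.h>
#include <linux/mutex.h>

struct my_mdev_state {
	struct mutex lock;		/* serializes resize vs. open */
	bool opened;			/* set in the driver's open callback */
	unsigned int instances;		/* currently aggregated instances */
};

/* hypothetical: grow or shrink the allocation; sketched further below */
static int my_mdev_resize(struct my_mdev_state *state, unsigned int count);

static ssize_t aggregation_store(struct device *dev,
				 struct device_attribute *attr,
				 const char *buf, size_t count)
{
	struct mdev_device *mdev = mdev_from_dev(dev);
	struct my_mdev_state *state = mdev_get_drvdata(mdev);
	unsigned int val;
	int ret;

	if (kstrtouint(buf, 10, &val) || !val)
		return -EINVAL;

	mutex_lock(&state->lock);
	if (state->opened) {
		ret = -EBUSY;			/* nak changes while in use */
	} else {
		ret = my_mdev_resize(state, val);
		if (!ret)
			state->instances = val;
	}
	mutex_unlock(&state->lock);

	return ret ? ret : count;
}
static DEVICE_ATTR_WO(aggregation);	/* a real driver would likely add a show() too */

static struct attribute *my_mdev_dev_attrs[] = {
	&dev_attr_aggregation.attr,
	NULL,
};

/* shows up as /sys/bus/mdev/devices/<UUID>/mdev/aggregation */
static const struct attribute_group my_mdev_std_group = {
	.name  = "mdev",
	.attrs = my_mdev_dev_attrs,
};

/* assigned to mdev_parent_ops.mdev_attr_groups by the vendor driver */
static const struct attribute_group *my_mdev_dev_groups[] = {
	&my_mdev_std_group,
	NULL,
};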
> >
> > So far I don't see a problem with this way. Regarding contiguous
> > allocations, ideally it should be fine as long as the aggregation paths
> > are properly locked, similar to the creation paths, when allocating
> > resources. It will introduce some additional work in the vendor driver,
> > but such overhead is worthwhile if it leads to a cleaner uapi.
> >
> > There is one open question, though. In concept the aggregation feature
> > can be used for both increasing and decreasing the resources when it is
> > exposed as a sysfs attribute, any time the device is not in use.
> > Increasing resources is probably fine, but I'm not sure about decreasing
> > them. Is there any vendor driver which cannot afford a resource decrease
> > once the device has ever been used (after deassignment), or which
> > requires at least an explicit reset before a decrease? If yes, how do we
> > report such special requirements (only-once, multiple-times,
> > multiple-times-before-1st-usage) to user space?
>
> It seems like a sloppy vendor driver that couldn't return a device to a
> post-creation state, ie. drop and re-initialize the aggregation state.

It might be a hardware limitation too...

> Userspace would always need to handle an aggregation failure; there
> might be multiple processes attempting to allocate resources
> simultaneously, or the user might simply be requesting more resources
> than available. The vendor driver should make a reasonable attempt to
> satisfy the user request, or else an insufficient resource error may
> appear at the application. vfio-mdev devices should always be reset
> before and after usage.

The two scenarios are different. One is letting userspace know whether
aggregation is supported, and any limitations. The other is using the
feature within the claimed limitations and then including error handling
logic in case of resource contention.

> > It's sort of like what Cornelia commented about standardization of
> > post-creation resource configuration. If it ends up being a complex
> > story (or at least takes time to understand/standardize all kinds of
> > requirements), does it still make sense to support a creation-time
> > parameter as a quick path for this aggregation feature? :-)
>
> We're not going to do both, right? We likely lock ourselves into one
> schema when we do it. Not only is the sysfs approach already in use in
> vfio-ap, but it seems more flexible. Above you raise the issue of
> dynamically resizing the aggregation between uses. We can't do that
> with only a creation-time parameter.

Yes, because a creation-time parameter is one-off.

> With a sysfs parameter the vendor
> driver can nak changes, allow changes when idle, potentially even allow
> changes while in use. Connie essentially brings up the question of how
> we can introspect a sysfs attribute, which is a big question. Perhaps we
> can nibble off a piece of that question by starting with a namespace
> per attribute. For instance, rather than doing:
>
> echo 2 > /sys/bus/mdev/devices/UUID/mdev/aggregation
>
> We could do:
>
> echo 2 > /sys/bus/mdev/devices/UUID/mdev/aggregation/value
>
> This gives us the whole mdev/aggregation/* namespace to describe other
> attributes exposing aspects of the aggregation support. Thanks,

Yes, this sounds like a better option. We can start with one attribute
(value) and extend it to cover any possible restriction in the future.

One note to Zhenyu - with this approach you should at least prepare for
both increasing and decreasing the resources through 'value' in the
GVT-g driver.

Thanks
Kevin
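[ To make that note to Zhenyu concrete, continuing the hypothetical
my_mdev_* sketch from above: a 'value' write has to be able to move the
allocation in both directions while the device is idle. None of these
names exist in GVT-g; my_parent_reserve() and my_parent_release() stand in
for whatever parent-device resource accounting the driver already has. ]

/* hypothetical parent-device resource accounting, for illustration only */
static bool my_parent_reserve(struct my_mdev_state *state, unsigned int n);
static void my_parent_release(struct my_mdev_state *state, unsigned int n);

static int my_mdev_resize(struct my_mdev_state *state, unsigned int count)
{
	unsigned int cur = state->instances;

	if (count == cur)
		return 0;

	if (count > cur) {
		/* claim (count - cur) additional instances from the parent */
		if (!my_parent_reserve(state, count - cur))
			return -ENOSPC;	/* userspace handles the failure */
	} else {
		/* return (cur - count) instances to the parent's pool */
		my_parent_release(state, cur - count);
	}

	return 0;
}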