Re: [PATCH] dmaengine: idxd: Do not use devm for 'struct device' object allocation

On 2/23/2021 11:30 AM, Dan Williams wrote:
> On Tue, Feb 23, 2021 at 10:11 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>> On Tue, Feb 23, 2021 at 10:05:58AM -0800, Dan Williams wrote:
>>> On Tue, Feb 23, 2021 at 9:09 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>>>> On Tue, Feb 23, 2021 at 08:27:46AM -0700, Dave Jiang wrote:
>>>>> On 2/23/2021 5:59 AM, Jason Gunthorpe wrote:
>>>>>> On Thu, Feb 18, 2021 at 02:31:54PM -0700, Dave Jiang wrote:
>>>>>>> Remove devm_* allocation of memory for 'struct device' objects.
>>>>>>> The devm_* lifetime is incompatible with the device->release() lifetime.
>>>>>>> Address issues flagged by CONFIG_DEBUG_KOBJECT_RELEASE. Add release
>>>>>>> functions for each component in order to free the allocated memory at
>>>>>>> the appropriate time. Each component, such as wq, engine, and group, now
>>>>>>> needs to be allocated individually in order to set up the lifetime properly.
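
For context, the lifetime pattern the commit message describes looks roughly
like the minimal sketch below. The names (idxd_wq, idxd_wq_release,
idxd_wq_alloc) are hypothetical stand-ins rather than the actual idxd code;
the point is that the object embedding the struct device is freed from its
->release() callback instead of by devm:

#include <linux/device.h>
#include <linux/slab.h>

struct idxd_wq {
        struct device conf_dev;  /* sysfs-visible handle for this queue */
        int id;
};

static void idxd_wq_release(struct device *dev)
{
        struct idxd_wq *wq = container_of(dev, struct idxd_wq, conf_dev);

        kfree(wq);  /* freed on the last put_device(), not at driver detach */
}

static struct idxd_wq *idxd_wq_alloc(struct device *parent, int id)
{
        struct idxd_wq *wq;

        wq = kzalloc(sizeof(*wq), GFP_KERNEL);  /* plain kzalloc, not devm_kzalloc */
        if (!wq)
                return NULL;

        device_initialize(&wq->conf_dev);
        wq->conf_dev.parent = parent;
        wq->conf_dev.release = idxd_wq_release;
        wq->id = id;
        return wq;
}

Teardown then goes through device_unregister()/put_device(), and no kfree()
appears at the call site. With devm_kzalloc() the memory can be freed at
driver unbind while the embedded kobject still holds references, which is the
class of problem CONFIG_DEBUG_KOBJECT_RELEASE is designed to expose.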
>>>>>> I really don't understand why idxd has so many struct device objects.
>>>>>>
>>>>>> Typically I expect a simple driver to have exactly one, usually
>>>>>> provided by its subsystem.
>>>>>>
>>>>>> What is the purpose?
>>>>> When we initially designed this, Dan suggested tying the device and
>>>>> workqueue enabling to the Linux device model, so that enabling/disabling
>>>>> can be done via bind/unbind of the sub-drivers. That's how we ended up
>>>>> with all the 'struct device' objects under idxd, one for each
>>>>> configurable component of the hardware device.
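
A rough sketch of what "enabling via bind/unbind" means in driver-model
terms; the bus and driver here are simplified, hypothetical versions of what
idxd registers, with the .match callback, device registration, and
driver_register() omitted:

#include <linux/device.h>

static struct bus_type idxd_conf_bus = {
        .name = "idxd-sketch",
        /* a real bus would also supply .match to pair devices and drivers */
};

static int wq_enable_probe(struct device *dev)
{
        /* binding the driver is what enables the queue in hardware */
        return 0;
}

static int wq_enable_remove(struct device *dev)
{
        /* unbinding disables it again */
        return 0;
}

static struct device_driver wq_enable_drv = {
        .name = "wq-enable",
        .bus = &idxd_conf_bus,
        .probe = wq_enable_probe,
        .remove = wq_enable_remove,
};

Userspace can then enable or disable a queue through the driver core's
standard sysfs bind/unbind files, rather than through a driver-private ioctl.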
>>>> IDXD created its own little bus just for that?? :\
>>> Yes, for the dynamic configurability of the queues and engines it was
>>> either a pile of ioctls, configfs, or a dynamic sysfs organization. I
>>> did dynamic sysfs for libnvdimm and suggested idxd could use the same
>>> model.

>>>> It is really weird that something called a workqueue would show up in
>>>> the driver model at all.
>>> It's a partition of the hardware functionality.
>> But to what end? What else are you going to do with a slice of the
>> IDXD device other than assign it to the IDXD driver?
> idxd, unlike other dmaengines, has a dynamic relationship between
> backend hardware engines and frontend submission queues. The
> assignment of resources to queues is managed via sysfs. I think this
> would be clearer if there were some more upstream usage examples
> beyond dmaengine. However, consider one exploratory usage of
> offloading memory copies in the pmem driver. Each pmem device could be
> given a submission queue even if all pmem devices shared an engine on
> the backend.
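
The backend/frontend split Dan describes can be pictured with a small, purely
illustrative data model (none of these names come from the idxd source):
several frontend queues grouped around a shared backend engine.

#include <linux/list.h>

struct sketch_engine {
        struct list_head group_node;    /* backend execution resource */
        int id;
};

struct sketch_wq {
        struct list_head group_node;    /* frontend submission queue */
        int id;
};

struct sketch_group {
        struct list_head engines;       /* e.g. one shared engine... */
        struct list_head wqs;           /* ...serving a wq per pmem device */
};

Which engines and wqs belong to which group is exactly the assignment that
the sysfs interface lets an administrator change at runtime.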

>> Is it for vfio? If so then why doesn't the vfio just bind to the WQ -
>> why does it have an aux device??
> Hmm, Dave? Couldn't there be an alternate queue driver that attaches
> vfio? At least that's how libnvdimm and dax devices change
> personality; they just load a different driver for the same device.
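
Extending the hypothetical bus from the earlier sketch, that "alternate
driver" idea would look something like two drivers eligible for the same wq
device, where whichever one binds determines the queue's personality:

static int wq_dmaengine_probe(struct device *dev)
{
        return 0;       /* would expose the queue as a DMA channel */
}

static int wq_vfio_probe(struct device *dev)
{
        return 0;       /* would hand the queue to vfio/mdev instead */
}

static struct device_driver wq_dmaengine_drv = {
        .name = "wq-dmaengine",
        .bus = &idxd_conf_bus,  /* hypothetical bus from the sketch above */
        .probe = wq_dmaengine_probe,
};

static struct device_driver wq_vfio_drv = {
        .name = "wq-vfio",
        .bus = &idxd_conf_bus,
        .probe = wq_vfio_probe,
};

Switching personality would then be the usual sysfs unbind/bind dance (or a
driver_override-style attribute, if the bus implements one), much as a dax
device can be rebound from device_dax to the kmem driver.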

Would that work for a queue that's shared between multiple mdevs? And wasn't the main reason for pushing an aux dev under VFIO to put the code under the right maintainer for code review?




