Re: Virtual DMA channels and physical DMA engines

Hi Vinod,

Yes, some kind of policy filter is precisely what I want. I was thinking about this last night and believe I have come up with a scheme that effectively does this via the channel resource allocation callback that fires when the framework attempts to acquire a channel. Since control comes back to my driver at that stage, I can decide whether I really want to hand out the specific channel the framework is trying to allocate.

Within my driver I obviously know all of my DMA channels and which devices are available and in use. So I can implement a policy in the driver that decides which channel is the next one to give out, and only when the framework requests that specific channel would my resource callback succeed. I will flesh this out some more.
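Roughly what I have in mind, as a small self-contained model (plain C with illustrative names only, not the real dmaengine API): the alloc-callback refuses a channel whenever some other engine is currently less loaded, so the framework's sequential scan over channels ends up distributing allocations evenly across engines.

```c
/* Self-contained model of the callback policy: accept a channel only
 * if its backing engine is currently the least loaded. Names like
 * policy_alloc_chan() are illustrative, not real dmaengine symbols. */
#include <assert.h>
#include <stddef.h>

#define P 4   /* physical DMA engines */
#define V 3   /* virtual channels per engine */

static int engine_load[P];       /* channels handed out per engine */
static int chan_busy[P * V];     /* 1 if the virtual channel is allocated */

/* Engine backing virtual channel 'chan' (channels laid out engine-major). */
static int chan_engine(int chan) { return chan / V; }

/* Return 1 (accept) only if this channel's engine is least loaded. */
static int policy_alloc_chan(int chan)
{
    int e = chan_engine(chan);
    for (int i = 0; i < P; i++)
        if (engine_load[i] < engine_load[e])
            return 0;            /* a less-loaded engine exists: refuse */
    engine_load[e]++;
    return 1;
}

/* Mimic the framework scanning channels in order until one accepts. */
static int request_channel(void)
{
    for (int c = 0; c < P * V; c++) {
        if (chan_busy[c])
            continue;
        if (policy_alloc_chan(c)) {
            chan_busy[c] = 1;
            return c;
        }
    }
    return -1;                   /* all P*V channels in use */
}
```

With P = 4 engines of V = 3 channels each, the first four requests land on four different engines instead of all on engine 0.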

Thanks,
Eric




Sent from my iPhone

>> On May 25, 2018, at 2:28 AM, Vinod <vkoul@xxxxxxxxxx> wrote:
>> 
>> On 24-05-18, 15:42, Dave Jiang wrote:
>> 
>> 
>>> On 05/24/2018 03:38 PM, Eric Pilmore wrote:
>>> Hi!
>>> 
>>> (Sorry if this is a duplicate; trying to remove HTML content.)
>>> 
>>> Wondering if somebody out there might be able to help me. I have a
>>> situation where I have a pool of physical DMA engines, with each
>>> appearing as a separate DMA device in the system, on top of which I
>>> want to create a much larger number of virtual DMA channels. So, I
>>> want to export a large number of virtual DMA channels which are
>>> serviced by a smaller pool of DMA engines.
>>> 
>>> Now one way that I see how this could be addressed is that for each
>>> instance of a DMA engine in the pool I instantiate say V number of
>>> virtual DMA channels to be associated with the given instance. So, if
>>> the size of my pool of DMA engines is P, then in the end I'll end up
>>> with PxV virtual DMA channels available to clients.
>>> 
>>> My concern is, as clients come in and do dma_request_channel() calls,
>>> when the (virtual) DMA channels are handed out it appears that they'll
>>> be handed out sequentially (ignoring for the moment the whole cpu node
>>> thing). Thus, all the V channels of DMA engine instance 0 will be
>>> handed out before the subsystem gets around to handing out channels
>>> for the next DMA engine, instance 1, and so on. As such, I'll end up
>>> with a scenario where my allocated virtual channels are not evenly
>>> distributed across my pool of P DMA engines. Is there a mechanism to
>>> more evenly distribute the allocation of DMA channels to clients
>>> across a pool of DMA engines?  So, if only P number of (virtual) DMA
>>> channels were to be allocated to clients, then the assignment to
>>> physical DMA engines would end up being 1-1, rather than P channels
>>> all being serviced by one physical DMA engine.
>>> 
>>> Is there something in the Linux DMA framework that might address this?
>>> I've been digging through what documentation I can find and the
>>> source code, and not seeing anything. Do I need to "hide" the physical
>>> DMA engines into appearing as one, and then within my driver manage
>>> the round-robin allocation of them to virtual channels myself?
>> 
>> I don't know the answer to your problem, but it's something I'm
>> definitely interested in as well. I wonder if the vdma stuff needs
>> to be plumbed to be NUMA-aware and all that fun stuff, and provide
>> the same ability for requests as the current dma_request_channel()
>> calls do for physical DMAs.
> 
> Would having a filter that allocates the channel based on some policy help
> here? In this case Eric wants the channels to be evenly distributed across
> the available controllers.
> 
> -- 
> ~Vinod
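For what it's worth, a policy filter along the lines Vinod describes might look roughly like the sketch below. It is self-contained C with stand-in types so it can be tried outside the kernel; a real driver would work with struct dma_chan and a dma_filter_fn passed to dma_request_channel(), and the load-tracking fields are my assumption, not part of the dmaengine API.

```c
/* Sketch of an even-distribution filter. 'struct vchan' stands in for
 * struct dma_chan plus driver-private state; this is a model, not the
 * real dmaengine filter interface. */
#include <assert.h>
#include <stdbool.h>

struct vchan {
    int engine;          /* index of the backing physical engine */
    int *engine_load;    /* shared per-engine allocation counts */
    int nengines;
};

/* In the spirit of dma_filter_fn: accept a candidate channel only if
 * no other engine currently has fewer channels handed out. */
static bool even_dist_filter(struct vchan *chan, void *param)
{
    (void)param;
    for (int i = 0; i < chan->nengines; i++)
        if (chan->engine_load[i] < chan->engine_load[chan->engine])
            return false;
    chan->engine_load[chan->engine]++;
    return true;
}
```

In a real driver the load counts would presumably be updated at actual allocation time rather than inside the filter, since a filter can be invoked on candidate channels that the framework ultimately fails to grab.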



