Re: Regarding patch: dmaengine: remove DMA_SG as it is dead code in kernel

On 4/17/2018 5:52 PM, Dave Jiang wrote:
> 
> 
> On 04/17/2018 02:31 PM, Sinan Kaya wrote:
>> On 4/17/2018 5:21 PM, Dave Jiang wrote:
>>>> Given we have few users of DMA Engine, I believe we should try to keep as
>>>> much functionality as possible in the kernel to allow new features to be
>>>> developed rather than limiting people's choices.
>>>>
>>> Would love to see more in kernel consumers for dmaengine. Maybe things
>>> that help accelerate core kernel bits? :)
>>>
>>
>> Side conversation...
>>
>> I'm in the same boat. I have been thinking about this for some time.
>> I also know that Intel maintains a repo on SourceForge for experimental
>> work that was never upstreamed.
>>
>> What kind of things could be useful for DMA engine optimization?
> 
> 1. async_tx removal, because it's kind of broken.
> 2. channel hotplug, in case future hardware has virtual channels instead
> of fixed physical ones?
> 3. device hotplug? I don't know.
> 4. maybe, instead of having a giant group of dmaengine ops, we move
> toward a single API call and pass in a command context? Perhaps
> something resembling blk-mq-style structures? That way, every time we
> add a new op, we don't need yet another function pointer. We would also
> have a source SG and a destination SG, which should cover virtual or
> DMA memory, right? I wonder how it would work for RAID operations though...
> 
> Just random ideas I'm throwing out there.

I was going to add SG support, but I dropped my work expecting your series
to be merged. I never got back to SG support after that.

> 
>>
>> I was told that most kernel data structures are resident and cannot
>> be paged, but then I hear about kernel virtual memory allocated by
>> vmalloc, and that makes me nervous.
>>
>> What worked until now and what failed?
> That was the series. After doing some testing, we didn't see the
> performance we wanted with IOATDMA, so we decided to put it on hold for
> now.
> https://lists.01.org/pipermail/linux-nvdimm/2017-August/011962.html
> 

Interesting, let me take a look. Maybe I'll get some inspiration :)

>>
>> I understand that pinning pages is the biggest challenge. What else?
>>
> 
> Given that some of the qcom DMA engines can operate on virtual memory,
> does that mean you won't have to worry about pinning pages? Do the DMA
> engines handle page faulting for you with the virtual address? That
> would make pinning pages a non-issue, no?
> 

There is an IOMMU in front of the DMA engine, but its goal is
virtualization/protection; there is no stall-and-resume model. I can't
use a CPU virtual address for DMA transfers. It needs to be a DMA
address.

-- 
Sinan Kaya
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.
--
To unsubscribe from this list: send the line "unsubscribe dmaengine" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
