Re: memory access op ideas

On 4/22/22 2:03 PM, Walker, Benjamin wrote:
> On 4/22/2022 7:50 AM, Jens Axboe wrote:
>> On 4/13/22 4:33 AM, Avi Kivity wrote:
>>> Unfortunately, only ideas, no patches. But at least the first seems very easy.
>>>
>>>
>>> - IORING_OP_MEMCPY_IMMEDIATE - copy some payload included in the op
>>> itself (1-8 bytes) to a user memory location specified by the op.
>>>
>>>
>>> Linked to another op, this can generate an in-memory notification
>>> useful for busy-waiters or the UMWAIT instruction
>>>
>>> This would be useful for Seastar, which looks at a timer-managed
>>> memory location to check when to break computation loops.
>>
>> This one would indeed be trivial to do. If we limit the max size
>> supported to eg 8 bytes like suggested, then it could be in the sqe
>> itself and just copied to the user address specified.
>>
>> Eg have sqe->len be the length (1..8 bytes), sqe->addr the destination
>> address, and sqe->off the data to copy.
>>
>> If you'll commit to testing this, I can hack it up pretty quickly...
>>
>>> - IORING_OP_MEMCPY - asynchronously copy memory
>>>
>>>
>>> Some CPUs include a DMA engine, and io_uring is a perfect interface to
>>> exercise it. It may be difficult to find space for two iovecs though.
>>
>> I've considered this one in the past too, and it is indeed an ideal fit
>> in terms of API. Outside of the DMA engines, it can also be used to
>> offload memcpy to a GPU, for example.
>>
>> The io_uring side would not be hard to wire up, basically just have the
>> sqe specify source, destination, length. Add some well-defined flags
>> depending on what the copy engine offers, for example.
>>
>> But probably some work required here in exposing an API and testing
>> etc...
>>
> 
> I'm about to send a set of patches to associate an io_uring with a
> dmaengine channel to this list. I'm not necessarily thinking of using
> it to directly drive the DMA engine itself (although we could, and
> there's some nice advantages to that), but rather as a way to offload
> data copies/transforms on existing io_uring operations. My primary
> focus has been the copy between kernel and user space when receiving
> from a socket.

Interesting - I think both use cases are valid: offloading a plain
memcpy, and using the engine to copy the data of an existing operation.

> Upcoming DMA engines also support SVA, allowing them to copy from
> kernel to user without page pinning. We've got patches for full SVA
> enabling in dmaengine prepared, such that each io_uring can allocate a
> PASID describing the user+kernel address space for the current
> context, allocate a channel via dmaengine and assign it the PASID, and
> then do DMA between kernel/user with new dmaengine APIs without any
> page pinning.
> 
> As preparation, I have submitted a series to dmaengine that allows for
> polling and out-of-order completions. See
> https://lore.kernel.org/dmaengine/20220201203813.3951461-1-benjamin.walker@xxxxxxxxx/T/#u.
> This is a necessary first step.
> 
> I'll get the patches out ASAP as an RFC. I'm sure my approach was
> entirely wrong, but you'll get the idea.

Please do - this sounds exciting! The whole point of an RFC is to get
some feedback on initial design before it potentially goes too far down
the wrong path.

-- 
Jens Axboe



