On 9/15/06, Olof Johansson <olof@xxxxxxxxx> wrote:
On Fri, 15 Sep 2006 11:38:17 -0500 Olof Johansson <olof@xxxxxxxxx> wrote:
> On Mon, 11 Sep 2006 19:44:16 -0400 Jeff Garzik <jeff@xxxxxxxxxx> wrote:
> > Are we really going to add a set of hooks for each DMA engine whizbang
> > feature?
> >
> > That will get ugly when DMA engines support memcpy, xor, crc32, sha1,
> > aes, and a dozen other transforms.
>
> Yes, it will be unmaintainable. We need some sort of multiplexing with
> per-function registrations.
>
> Here's a first cut at it, just very quick. It could be improved further
> but it shows that we could exorcise most of the hardcoded things pretty
> easily.

Ok, that was obviously a naive and not so nice first attempt, but I
figured it was worth it to show how it can be done. This is a little
more proper: specify at client registration time which function the
client will use, and make the channel use it. This way most of the
per-call error checking can be removed too.

Chris/Dan: Please consider picking this up as a base for the added
functionality and cleanups.
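For illustration, the client side comes out roughly like the code
below. This is only a sketch; the enum values and the extra
registration argument approximate the patch, they are not a final
interface:

	/* sketch only: a client declares up front which function
	 * it will use; enum values are illustrative */
	enum dma_function {
		DMAFUNC_MEMCPY,
		DMAFUNC_XOR,
		DMAFUNC_CRC32C,
	};

	static void my_event_callback(struct dma_client *client,
				      struct dma_chan *chan,
				      enum dma_event event)
	{
		/* channel add/remove notifications, unchanged from today */
	}

	static int my_init(void)
	{
		struct dma_client *client;

		/* say which function we need; the core then only hands
		 * us channels from devices that provide it */
		client = dma_async_client_register(my_event_callback,
						   DMAFUNC_MEMCPY);
		if (!client)
			return -ENOMEM;

		dma_async_client_chan_request(client, 1);
		return 0;
	}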
Thanks for this, Olof; it has sparked some ideas about how to redo support for multiple operations.
Clean up dmaengine a bit. Make the client registration specify which channel functions ("type") the client will use, and make devices register which functions they provide. Also exorcise most of the memcpy-specific references from the generic dmaengine code; a few are still left in the iov stuff.
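On the device side I assume that boils down to a capability mask plus
per-function hooks, along these lines (a sketch; the field and
constant names here are invented, not taken from the patch):

	/* illustrative names, not from the patch */
	#define DMA_CAP_MEMCPY	(1 << 0)
	#define DMA_CAP_XOR	(1 << 1)

	struct dma_device {
		unsigned long cap_mask;	/* DMA_CAP_* bits this engine provides */

		/* per-function hooks, valid only when the matching bit is set */
		dma_cookie_t (*device_memcpy)(struct dma_chan *chan,
					      void *dest, void *src,
					      size_t len);
		dma_cookie_t (*device_xor)(struct dma_chan *chan, void *dest,
					   void **src_list, int src_cnt,
					   size_t len);
	};

A driver would set only the bits and hooks it implements before
calling dma_async_device_register(), and matching clients to channels
in the core reduces to a mask test.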
I think we should keep the operation type in the function name but drop all the [buf|pg|dma]_to_[buf|pg|dma] permutations. The buffer type can be handled generically across all operation types. Something like the following for a pg_to_buf memcpy:

	struct dma_async_op_memcpy *op;
	struct page *pg;
	void *buf;
	size_t len;

	dma_async_op_init_src_pg(op, pg);
	dma_async_op_init_dest_buf(op, buf);
	dma_async_memcpy(chan, op, len);
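Under the hood the init helpers could just tag each operand with its
address type, so a single entry point per operation serves all the
permutations. A rough sketch (the struct layout here is made up, this
is not a patch):

	/* sketch: record what kind of address each operand is */
	enum dma_addr_type {
		DMA_ADDR_BUF,	/* kernel virtual address */
		DMA_ADDR_PG,	/* struct page + offset */
		DMA_ADDR_DMA,	/* pre-mapped dma_addr_t */
	};

	struct dma_async_addr {
		enum dma_addr_type type;
		union {
			void *buf;
			struct {
				struct page *pg;
				unsigned int offset;
			} pg;
			dma_addr_t dma;
		} u;
	};

	struct dma_async_op_memcpy {
		struct dma_async_addr src;
		struct dma_async_addr dest;
	};

	static inline void
	dma_async_op_init_src_pg(struct dma_async_op_memcpy *op,
				 struct page *pg)
	{
		op->src.type = DMA_ADDR_PG;
		op->src.u.pg.pg = pg;
		op->src.u.pg.offset = 0;
	}

dma_async_memcpy() would then switch on the two type fields once,
instead of us exporting a separate function for every
[buf|pg|dma] pairing.

-Dan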