On Tue, Apr 17, 2018 at 11:39:44PM -0700, Sujeev Dias wrote:
>
> On 04/17/2018 11:13 PM, Vinod Koul wrote:
> >On Tue, Apr 17, 2018 at 09:12:43AM -0700, Dave Jiang wrote:
> >>
> >>On 04/13/2018 07:12 PM, Sujeev Dias wrote:
> >>>Hi
> >>>
> >>>Can we please revert this patch? It breaks qcom dma-engine drivers and many consumers
> >>>after we propagated it to the 4.14 kernel.
> >
> >Why, can you point out the upstream drivers which broke? There are none, so
> >we are not obliged to revert this, sorry!
> >
> >>>commit: c678fa66341c7b82a57cfed0ba3656162e970f99
> >>>dmaengine: remove DMA_SG as it is dead code in kernel
> >>>
> >>>I don't see any alternate methods we can use either. We cannot use the standard dma_memcpy
> >>>APIs since the arguments for both src and destination are dma_addr_t. Because the DMA mapping
> >>>has to be done by the DMA controller (due to SMMU/SID configurations), the client must pass
> >>>the host DDR address as a CPU address, not a dma_addr.
> >
> >You can submit your driver along with the revert, we can give it due
> >consideration and suggest fixes to get your driver supported.
>
> Thanks Vinod, we're actually working on submitting a series of drivers for
> upstream consideration. I am trying to submit the first dmaengine driver
> for review by the end of next week. As for this request, we're also looking
> into using device_prep_dma_memcpy, with the consumer using
> dma_chan->device->dev for their mapping. This way, if clients recycle the
> buffer they can also use the dma_sync APIs to recycle the mapping.
>
> Also, as a minor optimization, we're planning to return a
> dma_async_tx_descriptor from prep_dma_memcpy only if the EOT flag is set.
> Otherwise we will return NULL on success. This way we avoid a memory
> allocation per packet. Do you have any concerns with that approach?

If you do not have a descriptor, how will you submit it later on?
NULL means failure. Which memory are you trying to avoid?

-- 
~Vinod
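
For reference, a minimal sketch of the mapping scheme discussed above, assuming
a generic dmaengine client. The function name example_issue_copy() and the
device-side destination address dev_dst are illustrative assumptions, not part
of the driver under discussion; only the mapping against chan->device->dev and
the plain dmaengine memcpy path follow the thread.

	/*
	 * Sketch only: the client maps its CPU buffer against the DMA
	 * controller's struct device (chan->device->dev) so that the
	 * controller's SMMU/SID context performs the translation, then
	 * issues an ordinary memcpy transaction.
	 */
	#include <linux/dmaengine.h>
	#include <linux/dma-mapping.h>

	static int example_issue_copy(struct dma_chan *chan, void *buf,
				      size_t len, dma_addr_t dev_dst)
	{
		struct device *dma_dev = chan->device->dev;
		struct dma_async_tx_descriptor *txd;
		dma_addr_t src;
		dma_cookie_t cookie;

		/* Map against the DMA controller's device, not the client's */
		src = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(dma_dev, src))
			return -ENOMEM;

		txd = dmaengine_prep_dma_memcpy(chan, dev_dst, src, len,
						DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
		if (!txd) {
			dma_unmap_single(dma_dev, src, len, DMA_TO_DEVICE);
			return -EIO;
		}

		cookie = dmaengine_submit(txd);
		if (dma_submit_error(cookie)) {
			dma_unmap_single(dma_dev, src, len, DMA_TO_DEVICE);
			return -EIO;
		}
		dma_async_issue_pending(chan);

		/*
		 * If the buffer is recycled for a later transfer instead of
		 * being unmapped, ownership can be handed back and forth with
		 * dma_sync_single_for_cpu()/dma_sync_single_for_device().
		 */
		return 0;
	}

With this arrangement the mapping is reusable: a recycled buffer only needs the
dma_sync_single_for_*() calls rather than a fresh map/unmap per packet, which
is the recycling behaviour described in the thread.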