On 2017/2/22 3:29, Laura Abbott wrote:
> On 02/20/2017 10:05 PM, Chen Feng wrote:
>> Hi Laura,
>>
>> When we enable kernel v4.4 or a newer version on our platform, we hit the
>> issue of flushing the cache without a reference device. It seems that this
>> patch set is a solution. I'm curious about the progress of the discussion.
>> Do you have any plan to fix it in v4.4 and newer kernel versions?
>>
>
> No, I've abandoned this approach based on feedback. The APIs had too much
> potential for incorrect usage. I'm ripping out the implicit caching in Ion
> and switching it to a model where there should always be a device available.
>
> What's your use case where you don't have a device structure?
>

Userspace uses an ioctl to flush the cache for a device:

  ion_sync_for_device
    dma_sync_sg_for_device(NULL, buffer->sg_table->sgl,
                           buffer->sg_table->nents, DMA_BIDIRECTIONAL);

And the system heap, when it allocates a zeroed buffer, flushes the zeroed
data out to DDR:

  alloc_buffer_page
    ion_pages_sync_for_device(NULL, page, PAGE_SIZE << order,
                              DMA_BIDIRECTIONAL);

> Thanks,
> Laura
>
>> On 2016/9/14 2:41, Laura Abbott wrote:
>>> On 09/13/2016 08:14 AM, Will Deacon wrote:
>>>> On Tue, Sep 13, 2016 at 08:02:20AM -0700, Laura Abbott wrote:
>>>>> On 09/13/2016 02:19 AM, Will Deacon wrote:
>>>>>> On Mon, Sep 12, 2016 at 02:32:56PM -0700, Laura Abbott wrote:
>>>>>>>
>>>>>>> arm64 may need to guarantee the caches are synced. Implement versions of
>>>>>>> the kernel_force_cache API to allow this.
>>>>>>>
>>>>>>> Signed-off-by: Laura Abbott <labbott@xxxxxxxxxx>
>>>>>>> ---
>>>>>>> v3: Switch to calling cache operations directly instead of relying on
>>>>>>> DMA mapping.
>>>>>>> ---
>>>>>>>  arch/arm64/include/asm/cacheflush.h |  8 ++++++++
>>>>>>>  arch/arm64/mm/cache.S               | 24 ++++++++++++++++++++----
>>>>>>>  arch/arm64/mm/flush.c               | 11 +++++++++++
>>>>>>>  3 files changed, 39 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> I'm really hesitant to expose these cache routines as an API solely to
>>>>>> support a driver sitting in staging/. I appreciate that there's a chicken
>>>>>> and egg problem here, but we *really* don't want people using these
>>>>>> routines in preference to the DMA API, and I fear that we'll simply grow
>>>>>> a bunch more users of these things if we promote it as an API like you're
>>>>>> proposing.
>>>>>>
>>>>>> Can the code not be contained under staging/, as part of ion?
>>>>>>
>>>>>
>>>>> I proposed that in V1 and it was suggested I make it a proper API:
>>>>>
>>>>> http://www.mail-archive.com/driverdev-devel@xxxxxxxxxxxxxxxxxxxxxx/msg47654.html
>>>>> http://www.mail-archive.com/driverdev-devel@xxxxxxxxxxxxxxxxxxxxxx/msg47672.html
>>>>
>>>> :/ then I guess we're in disagreement. If ion really needs this stuff
>>>> (which I don't fully grok), perhaps we should be exposing something at
>>>> a higher level from the architecture, so it really can't be used for
>>>> anything other than ion.
>>>
>>> I talked/complained about this at a past Plumbers. The gist is that Ion
>>> ends up acting as a fake DMA layer for its clients. It doesn't match the
>>> DMA API nicely because clients can allocate both coherent and non-coherent
>>> memory. Trying to use dma_map doesn't work because a) a device for
>>> coherency isn't known at allocation time and b) it kills performance.
>>> Part of the motivation for taking this approach is to avoid the need to
>>> rework the existing Android userspace and keep the existing behavior, as
>>> terrible as it is. Having Ion out of staging and not actually usable
>>> isn't helpful.
>>>
>>> I'll give this all some more thought and hopefully have one or two more
>>> proposals before Connect/Plumbers.
>>>
>>>>
>>>> Will
>>>>
>>>
>>> Thanks,
>>> Laura
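
For reference, the two call paths Chen Feng cites come from the v4.4-era
staging Ion driver, which hands a NULL struct device straight to the
streaming DMA API. Below is a rough sketch of that pattern, paraphrased and
abridged from drivers/staging/android/ion, so treat the details as
approximate rather than the exact upstream code:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/*
 * Sketch of the pattern under discussion: sync a freshly allocated page
 * "for device" even though no struct device is available, by building a
 * one-entry scatterlist and passing dev == NULL.
 */
void ion_pages_sync_for_device(struct device *dev, struct page *page,
			       size_t size, enum dma_data_direction dir)
{
	struct scatterlist sg;

	sg_init_table(&sg, 1);
	sg_set_page(&sg, page, size, 0);
	/*
	 * Not really correct: sg_dma_address() should hold a dma_addr_t that
	 * is valid for the target device, but with no device available the
	 * physical address is used directly.
	 */
	sg_dma_address(&sg) = page_to_phys(page);
	dma_sync_sg_for_device(dev, &sg, 1, dir);
}

The system heap calls this as ion_pages_sync_for_device(NULL, page,
PAGE_SIZE << order, DMA_BIDIRECTIONAL) after zeroing newly allocated pages,
and the ION_IOC_SYNC ioctl path likewise ends up in
dma_sync_sg_for_device(NULL, buffer->sg_table->sgl, buffer->sg_table->nents,
DMA_BIDIRECTIONAL). Both rely on a NULL device being tolerated, which is
exactly the "flush the cache without a reference device" problem raised
above.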