On 03/21/2016 10:04 AM, Boris Brezillon wrote:
> Hi Franklin,
>
> On Thu, 10 Mar 2016 17:56:42 -0600
> Franklin S Cooper Jr <fcooper@xxxxxx> wrote:
>
>> Based on DMA documentation and testing, using high memory buffers
>> when doing DMA transfers can lead to various issues, including
>> kernel panics.
>
> I guess it all comes from the vmalloced buffer case, which is not
> guaranteed to be physically contiguous (one of the DMA requirements,
> unless you have an IOMMU).
>
>> To work around this, simply use a CPU copy. High memory buffers are
>> very uncommon, so no noticeable performance hit should be seen.
>
> Hm, that's not necessarily true. UBI and UBIFS allocate their buffers
> using vmalloc (vmalloced buffers fall in the high_memory region), and
> those are likely to be discontiguous if you have NANDs with pages >
> 4k.
>
> I recently posted patches to ease sg_table creation from any kind of
> virtual address [1][2]. Can you try them and let me know if it fixes
> your problem?

Based on this thread [1], it looks like you won't be going forward
with your patchset.

I can probably reword the patch description to avoid implying that it
is uncommon to run into high memory buffers.

Also, DMA with NAND prefetch suffers a performance penalty compared to
CPU polling with prefetch, largely due to the significant overhead
incurred to read such a small amount of data at a time. The
optimizations I've worked on all revolve around reducing the cycles
spent before the DMA request executes. Making a high memory buffer
usable by the DMA engine adds a significant number of cycles, so for
performance reasons you are better off just using the CPU.

[1] https://lkml.org/lkml/2016/4/4/346

> Thanks,
>
> Boris
>
> [1] https://lkml.org/lkml/2016/3/8/276
> [2] https://lkml.org/lkml/2016/3/8/277
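
For reference, the CPU-copy fallback I'm describing above looks
roughly like the sketch below. This is a minimal illustration only,
assuming hypothetical my_dma_read()/my_cpu_read() helpers for the
driver's two transfer paths; it is not the actual omap2-nand code:

#include <linux/mm.h>	/* is_vmalloc_addr(), high_memory */

/* Hypothetical stand-ins for a real driver's two transfer paths. */
static int my_dma_read(void *buf, size_t len);
static int my_cpu_read(void *buf, size_t len);

static int my_read_buf(void *buf, size_t len)
{
	/*
	 * vmalloc'ed buffers (e.g. from UBI/UBIFS) are virtually but
	 * not necessarily physically contiguous, so DMA-mapping them
	 * directly is unsafe without an IOMMU. Fall back to the CPU
	 * (PIO/prefetch) path for anything in that region.
	 */
	if (buf >= high_memory || is_vmalloc_addr(buf))
		return my_cpu_read(buf, len);

	return my_dma_read(buf, len);
}

The pointer check itself costs almost nothing, which is the point:
walking the buffer page by page to build an sg_table would eat into
exactly the cycles the prefetch-path optimizations are trying to save.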