On Thu, May 25, 2023 at 05:12:37PM +0100, Robin Murphy wrote:
> On 24/05/2023 6:19 pm, Catalin Marinas wrote:
> > With the DMA bouncing of unaligned kmalloc() buffers now in place,
> > enable it for arm64 to allow the kmalloc-{8,16,32,48,96} caches. In
> > addition, always create the swiotlb buffer even when the end of RAM is
> > within the 32-bit physical address range (the swiotlb buffer can still
> > be disabled on the kernel command line).
> >
> > Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
> > Cc: Will Deacon <will@xxxxxxxxxx>
> > ---
> >  arch/arm64/Kconfig   | 1 +
> >  arch/arm64/mm/init.c | 7 ++++++-
> >  2 files changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index b1201d25a8a4..af42871431c0 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -120,6 +120,7 @@ config ARM64
> >  	select CRC32
> >  	select DCACHE_WORD_ACCESS
> >  	select DYNAMIC_FTRACE if FUNCTION_TRACER
> > +	select DMA_BOUNCE_UNALIGNED_KMALLOC
>
> We may want to give the embedded folks an easier way of turning this
> off, since IIRC one of the reasons for the existing automatic
> behaviour was people not wanting to have to depend on the command
> line. Things with 256MB or so of RAM seem unlikely to get enough
> memory efficiency back from the smaller kmem caches to pay off the
> SWIOTLB allocation :)

I thought about this initially and that's why I had two options
(ARCH_WANT_* and this one). But we already select SWIOTLB on arm64, so
for the embedded folk the only option is swiotlb=noforce on the cmdline
which, in turn, limits the kmalloc caches to kmalloc-64 (or whatever
the cache line size is) irrespective of this new select.

-- 
Catalin
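
[To make the fallback Catalin describes concrete, here is a minimal
standalone C sketch of the decision logic. The names swiotlb_enabled,
cache_line_size and kmalloc_minalign are illustrative stand-ins for
this sketch only, not the kernel's actual symbols or implementation.]

#include <stdbool.h>
#include <stdio.h>

#define ARCH_KMALLOC_MINALIGN	8	/* smallest cache: kmalloc-8 */

/* swiotlb=noforce on the kernel command line would make this false */
static bool swiotlb_enabled = true;

/* cache line size reported by the CPU; 64 bytes on most arm64 cores */
static unsigned int cache_line_size = 64;

static unsigned int kmalloc_minalign(void)
{
	/*
	 * With a bounce buffer available, unaligned kmalloc() buffers
	 * can be bounced for non-coherent DMA, so the sub-cacheline
	 * caches (kmalloc-{8,16,32,48,96}) are safe to create.
	 */
	if (swiotlb_enabled)
		return ARCH_KMALLOC_MINALIGN;

	/*
	 * No bounce buffer: every kmalloc() buffer must be aligned for
	 * cache maintenance, i.e. to at least a cache line, which is
	 * why swiotlb=noforce leaves only kmalloc-64 and larger.
	 */
	return cache_line_size;
}

int main(void)
{
	printf("min kmalloc alignment: %u\n", kmalloc_minalign());
	swiotlb_enabled = false;
	printf("with swiotlb=noforce:  %u\n", kmalloc_minalign());
	return 0;
}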