On Thu, May 25, 2023 at 01:31:38PM +0100, Jonathan Cameron wrote:
> On Wed, 24 May 2023 18:18:49 +0100
> Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > Another version of the series reducing the kmalloc() minimum
> > alignment on arm64 to 8 (from 128). Other architectures can easily
> > opt in by defining ARCH_KMALLOC_MINALIGN as 8 and selecting
> > DMA_BOUNCE_UNALIGNED_KMALLOC.
> >
> > The first 10 patches decouple ARCH_KMALLOC_MINALIGN from
> > ARCH_DMA_MINALIGN and, for arm64, limit the kmalloc() caches to
> > those aligned to the run-time probed cache_line_size(). On arm64 we
> > gain the kmalloc-{64,192} caches.
> >
> > The subsequent patches (11 to 15) further reduce the kmalloc()
> > caches to kmalloc-{8,16,32,96} if the default swiotlb is present by
> > bouncing small buffers in the DMA API.
>
> I think IIO_DMA_MINALIGN needs to switch to ARCH_DMA_MINALIGN as well.
>
> It's used to force static alignment of buffers within larger
> structures, to make them suitable for non-coherent DMA, similar to
> your other cases.

Ah, I forgot that you introduced that macro. However, at a quick grep,
I don't think this forced alignment always works as intended
(irrespective of this series). Let's take an example:

struct ltc2496_driverdata {
	/* this must be the first member */
	struct ltc2497core_driverdata common_ddata;
	struct spi_device *spi;

	/*
	 * DMA (thus cache coherency maintenance) may require the
	 * transfer buffers to live in their own cache lines.
	 */
	unsigned char rxbuf[3] __aligned(IIO_DMA_MINALIGN);
	unsigned char txbuf[3];
};

rxbuf is aligned to IIO_DMA_MINALIGN, and so are the structure and its
size, but txbuf sits only 3 bytes past that alignment boundary, i.e. in
the same cache line as rxbuf. So any cache maintenance on rxbuf can
corrupt txbuf. You need rxbuf to be the only resident of a cache line,
therefore the next member needs such alignment as well.

With this series and SWIOTLB enabled, however, if you try to transfer
3 bytes, they will be bounced, so the missing alignment won't matter
much.
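FWIW, what I'd expect the fix to look like in this example (untested
sketch; the same struct, just giving txbuf the same alignment
annotation so that rxbuf has its cache line to itself):

struct ltc2496_driverdata {
	/* this must be the first member */
	struct ltc2497core_driverdata common_ddata;
	struct spi_device *spi;

	/*
	 * Each DMA buffer starts on its own IIO_DMA_MINALIGN boundary
	 * (at least a cache line), so maintenance on one buffer cannot
	 * corrupt the other. The struct size is rounded up to the same
	 * alignment, so txbuf shares its line only with padding.
	 */
	unsigned char rxbuf[3] __aligned(IIO_DMA_MINALIGN);
	unsigned char txbuf[3] __aligned(IIO_DMA_MINALIGN);
};

-- 
Catalin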