Re: [PATCH v2 2/2] treewide: Add the __GFP_PACKED flag to several non-DMA kmalloc() allocations

On 2022-11-01 17:19, Catalin Marinas wrote:
> On Tue, Nov 01, 2022 at 11:59:19AM +0100, Christoph Hellwig wrote:
>> On Sun, Oct 30, 2022 at 04:43:21PM +0000, Catalin Marinas wrote:
>>> The alternative is for dma_map_*() to (dynamically) verify at the point
>>> of DMA setup rather than at the point of allocation (and bounce).

>> One issue with dma_map_* is that it can't report errors very well.
>> So I'd really prefer to avoid adding that kind of checking there.

> By checking I meant bouncing on condition fail rather than reporting
> errors.

>>> We have a few options here:
>>>
>>> a) Require that the dma_map_*() functions (including sg) get a size
>>>    multiple of cache_line_size() or the static ARCH_DMA_MINALIGN.

>> I don't think that is going to work, there are just too many instances
>> all over the tree that pass smaller sizes.

>>> b) Always bounce small sizes as they may have come from a small kmalloc
>>>    cache. The alignment of both ptr and length is ignored. This assumes
>>>    that in-struct alignments (crypto) use ARCH_DMA_MINALIGN and we can
>>>    reduce ARCH_KMALLOC_MINALIGN independently.

>> I think the start must be very much aligned.  So what is the problem
>> with just checking the size?
>>
>> Otherwise I think this is the way to go.  These tiny maps tend to be
>> various aux setup path things and are not performance critical (and
>> caring about performance for non-DMA-coherent devices isn't something
>> we do all that much anyway).

> The check is a bit weird as it needs some awareness of the kmalloc
> caches to avoid unnecessary bouncing. We can't just check for sizes
> smaller than 128, since 192 is another kmalloc cache that is not
> aligned to an ARCH_DMA_MINALIGN of 128:
>
> https://git.kernel.org/arm64/c/3508d0d4d5e458e43916bf4f61b29d2a6f15c2bb
>
> I can probably make it a bit nicer by checking the alignment of
> kmalloc_size_roundup(size) (this function doesn't seem to have any
> callers).
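
For illustration, that roundup-based check could look something like the
sketch below (illustrative only, not the code from the commit above; the
function name is invented):

#include <linux/slab.h>
#include <linux/cache.h>

/*
 * Sketch: the allocation may have come from a kmalloc cache with less
 * than ARCH_DMA_MINALIGN alignment, so it would need bouncing. With
 * ARCH_DMA_MINALIGN == 128, a 150-byte mapping rounds up to the
 * 192-byte cache (only 64-byte aligned) and bounces, while a 100-byte
 * mapping rounds up to the 128-byte cache and does not.
 */
static bool dma_kmalloc_may_be_misaligned(size_t size)
{
        return !IS_ALIGNED(kmalloc_size_roundup(size), ARCH_DMA_MINALIGN);
}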

> The main downside of bouncing is mobile phones, where those vendors are
> in the habit of passing noswiotlb on the kernel command line, or they
> want a very small bounce buffer (it's hard to pick a universally small
> size). I guess we can go ahead with this and, depending on how bad the
> problem is, look at optimising swiotlb to use a kmem_cache (or aligned
> kmalloc) as a fall-back for bouncing.

>>> (b) is what I attempted on one of my branches (until I got stuck on
>>> bouncing for iommu). A slight overhead on dma_map_*() to check the
>>> length, and we may keep a check on the start alignment (not the
>>> length though).

>> When was that?  These days dma-iommu already has bouncing code for
>> the untrusted device case, so handling the bouncing there for unaligned
>> requests on non-coherent devices shouldn't be all that horrible.

> The bouncing currently is all or nothing with iommu_dma_map_sg(), unlike
> dma_direct_map_sg(), which ends up calling dma_direct_map_page() so that
> the bouncing can be done per element. So I was looking to untangle
> iommu_dma_map_sg() in a similar way but postponed it as too complicated.
>
> As a less than optimal solution, we can force bouncing for the whole
> list if any of the sg elements is below the alignment size. Hopefully we
> won't have many such mixed-size cases.
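
Purely as illustration, that whole-list fallback could be as simple as
the check below (names invented, not taken from any posted patch):

#include <linux/scatterlist.h>
#include <linux/cache.h>

/*
 * Sketch: force-bounce the entire scatterlist if any element is small
 * enough that it may sit in an under-aligned kmalloc slab.
 */
static bool sg_list_needs_bounce(struct scatterlist *sgl, int nents)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i)
                if (sg->length < ARCH_DMA_MINALIGN)
                        return true;

        return false;
}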

Sounds like you may have got the wrong impression - the main difference with iommu_dma_map_sg_swiotlb() is that it avoids trying to do any of the clever concatenation stuff, and simply maps each segment individually with iommu_dma_map_page(), exactly like dma-direct; only segments which need bouncing actually get bounced.

The reason for tying it to the up-front dev_use_swiotlb() check is that sync and unmap have to do the right thing depending on how the list was initially mapped, and that's the easiest way to stay consistent. However, since the P2P stuff landed, what I'd like to do now is use the new scatterlist->dma_flags to indicate bounced segments in a similar fashion to bus-address segments, and so streamline the SWIOTLB bits into the main flow in the equivalent manner.
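
Presumably that would end up mirroring the existing bus-address helpers,
along these hypothetical lines (SG_DMA_SWIOTLB and the accessors below
are invented for illustration; only the bus-address variants exist
today):

#include <linux/scatterlist.h>

/* Assumes bit 0 is taken by the existing SG_DMA_BUS_ADDRESS flag. */
#define SG_DMA_SWIOTLB          (1 << 1)

/* sg->dma_flags currently only exists when CONFIG_PCI_P2PDMA is set. */
static inline void sg_dma_mark_swiotlb(struct scatterlist *sg)
{
        sg->dma_flags |= SG_DMA_SWIOTLB;
}

static inline bool sg_dma_is_swiotlb(struct scatterlist *sg)
{
        return sg->dma_flags & SG_DMA_SWIOTLB;
}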

What sadly wouldn't work is just adding extra conditions to dev_use_swiotlb() to go down the existing bounce-if-necessary path for all non-coherent devices, since there are non-coherent users of dma-buf and v4l2 which (for better or worse) depend on the clever concatenation stuff happening.
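
For concreteness, the extra condition being ruled out here would be
roughly the following, sketched against the current dev_use_swiotlb()
(dev_is_untrusted() being the local helper in dma-iommu.c):

#include <linux/swiotlb.h>
#include <linux/dma-map-ops.h>

/*
 * The naive extension that would NOT work: bounce for every
 * non-coherent device, not just untrusted ones. This defeats the
 * segment concatenation that non-coherent dma-buf and v4l2 users
 * rely on.
 */
static bool dev_use_swiotlb(struct device *dev)
{
        return IS_ENABLED(CONFIG_SWIOTLB) &&
               (dev_is_untrusted(dev) || !dev_is_dma_coherent(dev));
}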

Thanks,
Robin.



