On 23/08/2023 8:18 pm, Matthew Wilcox (Oracle) wrote:
> Russell and Marek pointed out some assumptions I was making about how sg
> lists work; e.g. that they are limited to 2GB and that the initial offset
> lies within the first page (or at least within the first folio that a
> page belongs to). While I think those assumptions are true, it's not
> too hard to write a version which does not have those assumptions and
> also calculates folio_size() only once per loop iteration.
FWIW, sg->offset > PAGE_SIZE has certainly been known to happen for
quasi-legitimate reasons. Last time it came up[1], I think the
conclusion in that case was that the crypto scatterwalk code wasn't
doing anything unreasonable, though it could perhaps do better, but it
was also straightforward enough to make the DMA API robust against it,
so we just did that anyway (commit 29a90b708938).
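For illustration, the usual trick for being robust to that (a minimal
sketch in the spirit of what the DMA API ended up doing, not the
literal code from that commit) is just to normalise the page/offset
pair before doing anything else with the sg entry:

	struct page *page = sg_page(sg);
	unsigned int offset = sg->offset;

	/* skip the whole pages covered by the oversized offset */
	page += offset >> PAGE_SHIFT;
	/* offset is now guaranteed to be < PAGE_SIZE */
	offset &= ~PAGE_MASK;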
As for >2GB segments, we've certainly seen cases of users mapping
absurdly large buffers and overflowing dma_length[2], so I would imagine
it's only the improbability of allocating that much physically
contiguous memory that keeps individual segment lengths from getting up
to UINT_MAX ;)
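To make that failure mode concrete (illustrative only, assuming a
64-bit kernel; the field names are from scatterlist.h but the
assignment is contrived):

	/*
	 * sg->length and sg->dma_length are both unsigned int, so a
	 * merged segment that grows past 4GB wraps when stored.
	 */
	size_t seg_len = 5ULL << 30;	/* 5GB of contiguous IOVA */
	sg_dma_len(sg) = seg_len;	/* silently truncated to 1GB */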
Cheers,
Robin.
[1] https://lore.kernel.org/linux-iommu/be3bb850-a9f2-61fa-e378-eb44489256e0@xxxxxxxxxxx/
[2] https://lore.kernel.org/linux-iommu/fbdbb8c0e550ae559ea3eedc1fea084c0111f202.1564418681.git.robin.murphy@xxxxxxx/
---
arch/arm/mm/dma-mapping.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0474840224d9..5409225b4abc 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -695,7 +695,6 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 	size_t size, enum dma_data_direction dir)
 {
-	struct folio *folio = page_folio(page);
 	phys_addr_t paddr = page_to_phys(page) + off;
 
 	/* FIXME: non-speculating: not required */
@@ -710,18 +709,19 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 	 * Mark the D-cache clean for these pages to avoid extra flushing.
 	 */
 	if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) {
-		ssize_t left = size;
+		struct folio *folio = pfn_folio(paddr / PAGE_SIZE);
 		size_t offset = offset_in_folio(folio, paddr);
 
-		if (offset) {
-			left -= folio_size(folio) - offset;
-			folio = folio_next(folio);
-		}
+		for (;;) {
+			size_t sz = folio_size(folio) - offset;
 
-		while (left >= (ssize_t)folio_size(folio)) {
-			left -= folio_size(folio);
-			set_bit(PG_dcache_clean, &folio->flags);
-			if (!left)
+			if (size < sz)
+				break;
+			if (!offset)
+				set_bit(PG_dcache_clean, &folio->flags);
+			offset = 0;
+			size -= sz;
+			if (!size)
 				break;
 			folio = folio_next(folio);
 		}
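To sanity-check the new loop, here's a quick userspace model of it
(the sizes, names and printf are all mine; printing stands in for
set_bit(PG_dcache_clean, ...)) showing that a folio only gets marked
when the range covers it from its first byte to its last:

#include <stdio.h>
#include <stddef.h>

/* Stand-ins: fsize[] models folio_size(), i++ models folio_next(). */
static const size_t fsize[] = { 16384, 4096, 65536, 4096 };

static void walk(size_t offset, size_t size)
{
	unsigned int i = 0;

	for (;;) {
		size_t sz = fsize[i] - offset;

		if (size < sz)		/* range ends inside this folio */
			break;
		if (!offset)		/* covered from byte 0 to the end */
			printf("folio %u marked clean\n", i);
		offset = 0;
		size -= sz;
		if (!size)
			break;
		i++;
	}
}

int main(void)
{
	/* Starts 8KB into folio 0 (skipped), fully covers folios 1
	 * and 2, and ends 100 bytes into folio 3 (also skipped). */
	walk(8192, 8192 + 4096 + 65536 + 100);
	return 0;
}

This prints "folio 1 marked clean" and "folio 2 marked clean" only,
which is the behaviour the commit message describes.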