Link to whole series:
https://lore.kernel.org/lkml/20231017202505.340906-1-rick.p.edgecombe@xxxxxxxxx/

On Wed, 2023-10-18 at 08:24 +0200, Christoph Hellwig wrote:
> On Tue, Oct 17, 2023 at 01:25:01PM -0700, Rick Edgecombe wrote:
> >  struct cma;
> > 
> > @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
> >  static inline void dma_free_contiguous(struct device *dev, struct page *page,
> >                 size_t size)
> >  {
> > -       __free_pages(page, get_order(size));
> > +       free_decrypted_pages((unsigned long)page_address(page),
> > get_order(size));
> 
> CMA can be highmem, so this won't work totally independent of what
> free_decrypted_pages actually does.  Also please avoid the overly
> long line.

Argh, yes, this is broken for highmem. Thanks for pointing it out.

For x86, we don't need to worry about doing set_memory_XXcrypted() on
highmem pages. Checking the Kconfig logic around the other
set_memory_XXcrypted() implementations:

 s390    - Doesn't support HIGHMEM
 powerpc - Doesn't support set_memory_XXcrypted() and HIGHMEM together

So set_memory_encrypted() is not needed on the HIGHMEM configs (i.e.
it's ok if there is no virtual mapping at free-time, because the
conversion work can be skipped). That means free_decrypted_pages()
could be changed to leave the HIGHMEM configs undisturbed, like this:

static inline void free_decrypted_pages(struct page *page, int order)
{
        void *addr = page_address(page);
        int ret = 0;

        if (addr)
                ret = set_memory_encrypted((unsigned long)addr, 1 << order);
        if (ret) {
                WARN_ONCE(1, "Failed...\n");
                return;
        }
        __free_pages(page, order);
}

Or we could just fix all the callers to open code the right logic. The
free_decrypted_pages() helper is not really saving code across the
series, and is only serving to help callers avoid re-introducing the
issue. But I'm sort of worried it will be easy to do just that. Hmm...
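
Open coded at the dma_free_contiguous() stub quoted above, the same
logic would look something like this (rough, untested sketch; the
warning text is just a placeholder):

static inline void dma_free_contiguous(struct device *dev, struct page *page,
                                       size_t size)
{
        void *addr = page_address(page); /* NULL for unmapped highmem */

        /*
         * Only the !HIGHMEM configs implement set_memory_encrypted(), so
         * no mapping at free-time means there is no conversion to undo.
         */
        if (addr && set_memory_encrypted((unsigned long)addr,
                                         1 << get_order(size))) {
                /* Leak the pages rather than free them while still shared */
                WARN_ONCE(1, "Failed to re-encrypt pages, leaking them\n");
                return;
        }

        __free_pages(page, get_order(size));
}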