From: Liu Shixin <liushixin2@xxxxxxxxxx>

After patch ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE"),
swiotlb_bounce will be called in swiotlb_tbl_map_single unconditionally.
This requires the original physical address to be valid, which is not
always true on stable-4.19 or earlier versions.

On stable-4.19, swiotlb_alloc_buffer will call swiotlb_tbl_map_single
with orig_addr equal to zero, which causes a panic such as:

 Unable to handle kernel paging request at virtual address ffffb77a40000000
 ...
 pc : __memcpy+0x100/0x180
 lr : swiotlb_bounce+0x74/0x88
 ...
 Call trace:
  __memcpy+0x100/0x180
  swiotlb_tbl_map_single+0x2c8/0x338
  swiotlb_alloc+0xb4/0x198
  __dma_alloc+0x84/0x1d8
 ...

On stable-4.9 and stable-4.14, swiotlb_alloc_coherent will call
map_single with orig_addr equal to zero, which can cause the same panic.

Fix this by skipping swiotlb_bounce when orig_addr is zero.

Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
Signed-off-by: Liu Shixin <liushixin2@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 lib/swiotlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -607,7 +607,8 @@ found:
 	 * unconditional bounce may prevent leaking swiotlb content (i.e.
 	 * kernel memory) to user-space.
 	 */
-	swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
+	if (orig_addr)
+		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
 	return tlb_addr;
 }
 EXPORT_SYMBOL_GPL(swiotlb_tbl_map_single);
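
A note for context (not part of the patch): the allocation path that trips
over the unconditional bounce looks roughly like the sketch below. This is a
minimal reconstruction based on the stable-4.19 swiotlb_tbl_map_single()
prototype; the exact arguments, DMA direction, and surrounding code are
assumptions and may differ between the stable trees.

	/*
	 * Sketch (assumed, simplified) of the stable-4.19 allocation path:
	 * the bounce buffer is mapped with no original buffer behind it,
	 * so orig_addr is passed as 0.
	 */
	phys_addr = swiotlb_tbl_map_single(dev,
			__phys_to_dma(dev, io_tlb_start),
			0,			/* orig_addr == 0 */
			size, DMA_FROM_DEVICE, attrs);

	/*
	 * With the unconditional bounce added by ddbd89deb7d3, the mapping
	 * path would then copy from phys_to_virt(0), faulting as in the
	 * call trace above. The "if (orig_addr)" check added here skips
	 * the bounce for such internal allocations while keeping it for
	 * real mappings, preserving the original info-leak fix.
	 */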