Hi,

On Wed, 2019-11-27 at 15:40 +0100, Christoph Hellwig wrote:
> Devices that are forced to DMA through unencrypted bounce buffers
> need to be treated as if they are addressing limited.
> 
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  kernel/dma/mapping.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1dbe6d725962..f6c35b53d996 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -416,6 +416,8 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
>   */
>  bool dma_addressing_limited(struct device *dev)
>  {
> +	if (force_dma_unencrypted(dev))
> +		return true;
>  	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>  			dma_get_required_mask(dev);
>  }

Any chance to have the case (swiotlb_force == SWIOTLB_FORCE) also
included? (A rough sketch of what I mean is below.)

Otherwise, for the series

Reviewed-by: Thomas Hellström <thellstrom@xxxxxxxxxx>
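
For illustration, something along these lines is what I had in mind; this
is only an untested sketch, and it assumes kernel/dma/mapping.c gains an
include of <linux/swiotlb.h> so that swiotlb_force and SWIOTLB_FORCE are
visible here:

bool dma_addressing_limited(struct device *dev)
{
	if (force_dma_unencrypted(dev))
		return true;
	/*
	 * Sketch only: with swiotlb=force every mapping is bounced, so
	 * treat the device as addressing limited as well. Assumes
	 * swiotlb_force is reachable from this file.
	 */
	if (swiotlb_force == SWIOTLB_FORCE)
		return true;
	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
			dma_get_required_mask(dev);
}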