On 18.10.2022 10:24, Christoph Hellwig wrote:
> @@ -127,19 +128,22 @@ static inline unsigned int i915_sg_dma_sizes(struct scatterlist *sg)
>  	return page_sizes;
>  }
>  
> -static inline unsigned int i915_sg_segment_size(void)
> +static inline unsigned int i915_sg_segment_size(struct device *dev)
>  {
> -	unsigned int size = swiotlb_max_segment();
> -
> -	if (size == 0)
> -		size = UINT_MAX;
> -
> -	size = rounddown(size, PAGE_SIZE);
> -	/* swiotlb_max_segment_size can return 1 byte when it means one page. */
> -	if (size < PAGE_SIZE)
> -		size = PAGE_SIZE;
> -
> -	return size;
> +	size_t max = min_t(size_t, UINT_MAX, dma_max_mapping_size(dev));
> +
> +	/*
> +	 * Xen on x86 can reshuffle pages under us. The DMA API takes
> +	 * care of that both in dma_alloc_* (by calling into the hypervisor
> +	 * to make the pages contiguous) and in dma_map_* (by bounce buffering).
> +	 * But i915 abuses ignores the coherency aspects of the DMA API and
> +	 * thus can't cope with bounce buffering actually happening, so add
> +	 * a hack here to force small allocations and mapping when running on
> +	 * Xen. (good luck with TDX, btw --hch)
> +	 */
> +	if (IS_ENABLED(CONFIG_X86) && xen_domain())
> +		max = PAGE_SIZE;
> +	return round_down(max, PAGE_SIZE);
>  }

Shouldn't this then be xen_pv_domain() that you use here, and - if you
really want IS_ENABLED() in addition - CONFIG_XEN_PV?

Jan
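
For illustration, the variant Jan appears to be suggesting would presumably
replace just the two lines of the check in the quoted hunk along these lines
(a sketch only, assuming the IS_ENABLED() guard is kept at all; the exact
form is not taken from the patch):

	/*
	 * Sketch of the suggested check: limit the workaround to PV guests,
	 * where pages really can be non-contiguous in machine address space.
	 * Reuses "max" and PAGE_SIZE from the quoted i915_sg_segment_size().
	 */
	if (IS_ENABLED(CONFIG_XEN_PV) && xen_pv_domain())
		max = PAGE_SIZE;

xen_pv_domain() (from <xen/xen.h>) and CONFIG_XEN_PV are existing kernel
symbols; the point of the question seems to be that xen_domain() is also
true for HVM/PVH guests, for which the page-contiguity problem the comment
describes does not arise.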