On Fri, May 10, 2024 at 04:03:04PM +0800, Yan Zhao wrote:
> > > @@ -1358,10 +1377,17 @@ int iopt_area_fill_domain(struct iopt_area *area, struct iommu_domain *domain)
> > >  {
> > >  	unsigned long done_end_index;
> > >  	struct pfn_reader pfns;
> > > +	bool cache_flush_required;
> > >  	int rc;
> > >
> > >  	lockdep_assert_held(&area->pages->mutex);
> > >
> > > +	cache_flush_required = area->iopt->noncoherent_domain_cnt &&
> > > +			       !area->pages->cache_flush_required;
> > > +
> > > +	if (cache_flush_required)
> > > +		area->pages->cache_flush_required = true;
> > > +
> > >  	rc = pfn_reader_first(&pfns, area->pages, iopt_area_index(area),
> > >  			      iopt_area_last_index(area));
> > >  	if (rc)
> > > @@ -1369,6 +1395,9 @@ int iopt_area_fill_domain(struct iopt_area *area, struct iommu_domain *domain)
> > >
> > >  	while (!pfn_reader_done(&pfns)) {
> > >  		done_end_index = pfns.batch_start_index;
> > > +		if (cache_flush_required)
> > > +			iopt_cache_flush_pfn_batch(&pfns.batch);
> > > +
> >
> > This is a bit unfortunate, it means we are going to flush for every
> > domain, even though it is not required. I don't see any easy way out
> > of that :(

> Yes. Do you think it's possible to add an op get_cache_coherency_enforced
> to iommu_domain_ops?

Do we need that? The hwpt already keeps track of that?

The enforced flag could be copied into the area alongside storage_domain.

Then I guess you could avoid flushing in the case the page came from
the storage_domain...

You'd want the storage_domain to preferentially point to any
non-enforced domain.

Is it worth it? How slow is this stuff?

Jason
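
To make the storage_domain idea above concrete, here is a minimal sketch.
It is an illustration under assumptions, not existing iommufd code:
noncoherent_domain_cnt is from the patch under discussion, storage_domain
exists today, but the storage_domain_enforced field and the
iopt_area_fill_needs_flush() helper are invented names.

/*
 * Sketch only: storage_domain_enforced and iopt_area_fill_needs_flush()
 * are hypothetical. Existing fields of the structs are elided.
 */
struct iopt_area {
	/* ... existing fields elided ... */
	struct io_pagetable *iopt;
	struct iommu_domain *storage_domain;
	/*
	 * Assumed: copied from the hwpt's enforce_cache_coherency state
	 * when storage_domain is assigned.
	 */
	bool storage_domain_enforced;
};

static bool iopt_area_fill_needs_flush(struct iopt_area *area)
{
	/* No non-coherent domain attached: no flush needed at all */
	if (!area->iopt->noncoherent_domain_cnt)
		return false;

	/*
	 * PFNs read back out of a non-enforced (non-coherent) storage
	 * domain were already flushed when that domain was filled, so
	 * the flush can be skipped for this fill.
	 */
	if (area->storage_domain && !area->storage_domain_enforced)
		return false;

	return true;
}

The check only short-circuits when storage_domain is non-enforced, which
is why the selection of storage_domain would want to prefer such a domain
when one is available.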