Hi Bingbu,

On Fri, Aug 16, 2024 at 11:31:21AM +0800, bingbu.cao@xxxxxxxxx wrote:
> From: Bingbu Cao <bingbu.cao@xxxxxxxxx>
>
> ipu6_mmu_map() and ipu6_mmu_unmap() operated on a per-page basis,
> leading to frequent spin_lock/unlock and clflush_cache_range() calls
> for each page. This is inefficient, especially when handling large
> dma-bufs with hundreds of pages.
>
> This change makes ipu6_mmu_map() and ipu6_mmu_unmap() process
> multiple contiguous pages in batches. This significantly reduces the
> number of spin_lock/unlock and clflush_cache_range() calls and
> improves performance.
>
> Signed-off-by: Jianhui Dai <jianhui.j.dai@xxxxxxxxx>
> Signed-off-by: Bingbu Cao <bingbu.cao@xxxxxxxxx>

Thanks for the patch.

Could you split this into three patches (at least) to make it more
reviewable:

- Move l2_unmap() up to its new location.
- Add the unmapping optimisation.
- Add the mapping optimisation.

A sketch of the batching idea is included after the signature for
reference.

-- 
Kind regards,

Sakari Ailus
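
P.S. For anyone following along, the batching described in the commit
message boils down to something like the sketch below. This is a
minimal, self-contained userspace illustration, not the ipu6 driver
code: pt_lock(), pt_unlock(), flush_range() and the flat pte_table are
hypothetical stand-ins for the driver's spinlock, clflush_cache_range()
and real page-table writes.

/*
 * Hypothetical sketch only; compiles as plain C99, not kernel code.
 * It contrasts per-page lock/flush with one lock/flush per range.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

static uint32_t pte_table[1024];        /* fake flat page table */
static unsigned long lock_calls, flush_calls;

static void pt_lock(void)   { lock_calls++; }  /* stand-in: spin_lock() */
static void pt_unlock(void) { }                /* stand-in: spin_unlock() */

/* Stand-in for clflush_cache_range(): just counts invocations. */
static void flush_range(void *p, size_t len)
{
        (void)p;
        (void)len;
        flush_calls++;
}

/* Per-page mapping: one lock/unlock and one flush per page. */
static void map_per_page(unsigned long iova, unsigned long paddr,
                         size_t size)
{
        for (size_t off = 0; off < size; off += PAGE_SIZE) {
                unsigned int idx = (iova + off) >> PAGE_SHIFT;

                pt_lock();
                pte_table[idx] = (uint32_t)((paddr + off) >> PAGE_SHIFT);
                flush_range(&pte_table[idx], sizeof(pte_table[idx]));
                pt_unlock();
        }
}

/* Batched mapping: one lock/unlock and one flush for the whole range. */
static void map_batched(unsigned long iova, unsigned long paddr,
                        size_t size)
{
        unsigned int first = iova >> PAGE_SHIFT;
        unsigned int npages = size >> PAGE_SHIFT;

        pt_lock();
        for (unsigned int i = 0; i < npages; i++)
                pte_table[first + i] =
                        (uint32_t)((paddr >> PAGE_SHIFT) + i);
        flush_range(&pte_table[first], npages * sizeof(pte_table[0]));
        pt_unlock();
}

int main(void)
{
        /* A 256-page (1 MiB) buffer, i.e. the "hundreds of pages" case. */
        map_per_page(0, 0x100000, 256 * PAGE_SIZE);
        printf("per-page: %lu locks, %lu flushes\n", lock_calls,
               flush_calls);

        lock_calls = flush_calls = 0;
        map_batched(0, 0x100000, 256 * PAGE_SIZE);
        printf("batched:  %lu locks, %lu flushes\n", lock_calls,
               flush_calls);
        return 0;
}

Under these assumptions the per-page version does 256 lock/flush pairs
for a 1 MiB buffer while the batched one does a single pair, which is
the reduction the commit message is claiming.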