On Fri, May 19, 2023 at 04:56:53PM +0200, Thomas Gleixner wrote:
> On Fri, May 19 2023 at 12:01, Uladzislau Rezki wrote:
> > On Wed, May 17, 2023 at 06:32:25PM +0200, Thomas Gleixner wrote:
> >> That made me look into this coalescing code. I understand why you want
> >> to batch and coalesce and rather do a rare full tlb flush than sending
> >> gazillions of IPIs.
> >>
> > Your issue has no connection with merging. But the place you looked
> > at was correct :)
>
> I'm not talking about merging. I'm talking about coalescing ranges.
>
>     start = 0x95c8d000 end = 0x95c8e000
>
> plus the VA from the list which has
>
>     start = 0xf08a1000 end = 0xf08a5000
>
> which results in a flush range of:
>
>     start = 0x95c8d000 end = 0xf08a5000
>
> No?
>
Correct. 0x95c8d000 is the minimum, 0xf08a5000 is the maximum.

> > @@ -1739,15 +1739,14 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
> >  	if (unlikely(list_empty(&local_purge_list)))
> >  		goto out;
> >
> > -	start = min(start,
> > -		list_first_entry(&local_purge_list,
> > -			struct vmap_area, list)->va_start);
> > +	/* OK. A per-cpu wants to flush an exact range. */
> > +	if (start != ULONG_MAX)
> > +		flush_tlb_kernel_range(start, end);
> >
> > -	end = max(end,
> > -		list_last_entry(&local_purge_list,
> > -			struct vmap_area, list)->va_end);
> > +	/* Flush per-VA. */
> > +	list_for_each_entry(va, &local_purge_list, list)
> > +		flush_tlb_kernel_range(va->va_start, va->va_end);
> >
> > -	flush_tlb_kernel_range(start, end);
> >  	resched_threshold = lazy_max_pages() << 1;
>
> That's completely wrong, really.
>
Absolutely. That is why we do not flush a range per-VA ;-) I provided
the data just to show what happens if we do it!

Per-VA flushing works when a system is not capable of doing a full
flush, so it has to flush page by page. In that scenario we should skip
the unmapped ranges that lie between the VAs in the purge list.

--
Uladzislau Rezki