Re: Excessive TLB flush ranges

On Tue, May 16 2023 at 09:48, Russell King wrote:
> On Tue, May 16, 2023 at 10:44:07AM +0200, Thomas Gleixner wrote:
>> On Tue, May 16 2023 at 09:19, Russell King wrote:
>> > On Tue, May 16, 2023 at 08:37:18AM +0200, Thomas Gleixner wrote:
>> >> void flush_tlb_kernel_vas(struct list_head *list, unsigned int num_entries):
>> >> 
>> >> So that an architecture can decide whether it's worth it to walk the
>> >> entries or whether it resorts to a full flush.
>> >
>> > Is "num_entries" what an arch would want to use? How would it use that?
>> > It doesn't tell an arch whether there is a large range of many list
>> > entries, or a single entry covering a large range.
>> 
>> Does it matter?
>> 
>> The total number of entries to flush is what accumulates and at some
>> architecture specific threshold that becomes more expensive than a full
>> flush, independent of the range of the individual list entries, no?
>
> It depends what you mean by "num_entries" - is that the number of
> pages to be flushed in total in the range?

Yes. The sum of the page counts of all VA ranges on the list.
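
Something like the below completely untested sketch, where
ARCH_TLB_FLUSH_ALL_THRESHOLD is a made up arch specific cutoff:

void flush_tlb_kernel_vas(struct list_head *list, unsigned int num_entries)
{
	struct vmap_area *va;

	/* Past this many pages a full flush is assumed to be cheaper */
	if (num_entries > ARCH_TLB_FLUSH_ALL_THRESHOLD) {
		flush_tlb_all();
		return;
	}

	/* Otherwise flush each collected VA range individually */
	list_for_each_entry(va, list, list)
		flush_tlb_kernel_range(va->va_start, va->va_end);
}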

> If so, what does a valid "start" and "end" range passed to
> __purge_vmap_area_lazy() mean for num_entries - does that go to
> (end - start) / PAGE_SIZE, or would it still be restricted to the
> sum of that per list entry? If so, what's the point of passing in
> "start" and "end" to this function?

_vm_unmap_aliases() collects dirty ranges from the per-CPU
vmap_block_queue (whatever that is) and hands a start..end range to
__purge_vmap_area_lazy().
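
The collection itself is just a min/max merge of all dirty ranges into
a single hull, roughly like the below paraphrase (struct range and the
helper are made up for illustration; the real code walks the per-CPU
vmap blocks instead of an array):

struct range { unsigned long start, end; };

static struct range merge_dirty_ranges(const struct range *r, int n)
{
	struct range hull = { .start = ULONG_MAX, .end = 0 };
	int i;

	for (i = 0; i < n; i++) {
		hull.start = min(r[i].start, hull.start);
		hull.end = max(r[i].end, hull.end);
	}
	return hull;
}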

As I pointed out already, this can also end up being an excessive range
because there is no guarantee that those individually collected ranges
are consecutive. Though I have no idea how to cure that right now.
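
E.g. two single page dirty ranges at the made up addresses
0xffff800008000000 and 0xffff80007f000000 merge into a hull of
(0x7f001000 - 0x08000000) / 0x1000 ~= 487k pages, for only two pages
which actually need flushing.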

AFAICT this was done to spare flush IPIs, but the mm folks should be
able to explain that properly.

In the problematic case at hand, and from what I've seen in tracing so
far, e.g. module unload always looks the same: a small direct map range
is collected plus a bunch of vmap entries from the purge list. But I
have not yet tried hard to figure out whether that direct map
collection is ever going to cover a larger range for no reason.

Thanks,

        tglx



