Re: Excessive TLB flush ranges

On Mon, May 15 2023 at 17:59, Russell King wrote:
> On Mon, May 15, 2023 at 06:43:40PM +0200, Thomas Gleixner wrote:
>> bpf_prog_free_deferred()
>>   vfree()
>>     _vm_unmap_aliases()
>>        collect_per_cpu_vmap_blocks: start:0x95c8d000 end:0x95c8e000 size:0x1000 
>>        __purge_vmap_area_lazy(start:0x95c8d000, end:0x95c8e000)
>> 
>>          va_start:0xf08a1000 va_end:0xf08a5000 size:0x00004000 gap:0x5ac13000 (371731 pages)
>>          va_start:0xf08a5000 va_end:0xf08a9000 size:0x00004000 gap:0x00000000 (     0 pages)
>>          va_start:0xf08a9000 va_end:0xf08ad000 size:0x00004000 gap:0x00000000 (     0 pages)
>>          va_start:0xf08ad000 va_end:0xf08b1000 size:0x00004000 gap:0x00000000 (     0 pages)
>>          va_start:0xf08b3000 va_end:0xf08b7000 size:0x00004000 gap:0x00002000 (     2 pages)
>>          va_start:0xf08b7000 va_end:0xf08bb000 size:0x00004000 gap:0x00000000 (     0 pages)
>>          va_start:0xf08bb000 va_end:0xf08bf000 size:0x00004000 gap:0x00000000 (     0 pages)
>>          va_start:0xf0a15000 va_end:0xf0a17000 size:0x00002000 gap:0x00156000 (   342 pages)
>> 
>>       flush_tlb_kernel_range(start:0x95c8d000, end:0xf0a17000)
>> 
>>          Does 372106 flush operations where only 31 are useful
>
> So, you asked the architecture to flush a large range, and are then
> surprised if it takes a long time. There is no way to know how many
> of those are useful.

I did not ask for that. That's the range merging logic in
__purge_vmap_area_lazy() which decides that the one page at 0x95c8d000
should be merged into a single flush range with all the rest. I'm just
the messenger :)
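
Roughly, from memory (a simplified sketch, not the literal mm/vmalloc.c
code), that merging boils down to:

    /*
     * Simplified sketch of the range merging, from memory. @start/@end
     * is the range handed in by _vm_unmap_aliases(), i.e. the dirty
     * per-cpu vmap blocks.
     */
    static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
    {
            struct vmap_area *va;

            list_for_each_entry(va, &local_purge_list, list) {
                    /* Widen the window over every lazily freed area ... */
                    start = min(start, va->va_start);
                    end   = max(end, va->va_end);
            }

            /* ... and flush the whole thing, gaps included */
            flush_tlb_kernel_range(start, end);
            ...
    }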

> Now, while using the sledge hammer of flushing all TLB entries may
> sound like a good answer, if we're only evicting 31 entries, the
> other entries are probably useful to have, no?

That's what I was already asking in the part of my mail which you
removed from the reply, no?

> I think that you'd only run into this if you had a huge BPF
> program and you tore it down, no?

There was no huge BPF program. Some default seccomp muck.

I have another trace which shows that seccomp creates 10 BPF programs
for one process, each of which allocates 8K of vmalloc memory in the
0xf0a.... address range.

On teardown this is even more horrible than the above. Every allocation
is deleted separately, i.e. 8K at a time, and the pattern is always the
same: one extra page in the 0xca6..... address range is handed in via
_vm_unmap_aliases(), which expands the flush range insanely.

So this means ~1.5M flush operations to flush a total of 30 TLB entries.
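
Back of the envelope, using just the visible address prefixes (so the
numbers below are approximate):

      (0xf0a00000 - 0xca600000) / 4096  ~ 156k flush operations per free
      10 frees * ~156k                  ~ 1.5M flush operations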

That reproduces easily in a VM and shows exactly the same behaviour:

       Extra page[s] via         The actual allocation     TLB flush
       _vm_unmap_aliases() Pages                     Pages Flush start       Pages
alloc:                           ffffc9000058e000      2
free : ffff888144751000      1   ffffc9000058e000      2   ffff888144751000  17312759359

alloc:                           ffffc90000595000      2
free : ffff8881424f0000      1   ffffc90000595000      2   ffff8881424f0000  17312768167

.....

seccomp seems to install 29 BPF programs for that process. So on exit()
this results in 29 full TLB flushes on x86, each of which is used to
flush exactly three TLB entries.
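
Which is not surprising, because IIRC the x86 flush_tlb_kernel_range()
gives up on single page invalidation once the range exceeds the single
page flush ceiling (a few dozen pages). Roughly (simplified, from
memory, not the literal arch/x86/mm/tlb.c code):

    void flush_tlb_kernel_range(unsigned long start, unsigned long end)
    {
            if (end == TLB_FLUSH_ALL ||
                (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
                    /* Way over the ceiling -> nuke the whole TLB on all CPUs */
                    flush_tlb_all();
            } else {
                    /* Only a few pages -> invalidate them one by one */
                    ...
            }
    }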

The actual two-page allocation (ffffc9...) is in the vmalloc space,
while the extra page (ffff88...) is in the direct mapping.
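
That's also where the insane flush page count in the trace above comes
from. For the first free, with 4K pages:

      flush end   = 0xffffc9000058e000 + 2 * 0x1000 = 0xffffc90000590000
      flush start = 0xffff888144751000

      (0xffffc90000590000 - 0xffff888144751000) >> 12 = 17312759359 pages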

This is a plain Debian install with a 6.4-rc1 kernel. The reproducer
is: # systemctl start logrotate

Thanks,

        tglx



