Re: [PATCH bpf-next 0/5] bpf: BPF specific memory allocator.

On Fri, Jul 08, 2022 at 03:41:47PM +0200, Michal Hocko wrote:
> On Wed 06-07-22 11:05:25, Alexei Starovoitov wrote:
> > On Wed, Jul 06, 2022 at 06:55:36PM +0100, Matthew Wilcox wrote:
> [...]
> > > For example, I assume that a BPF program
> > > has a fairly tight limit on how much memory it can cause to be allocated.
> > > Right?
> > 
> > No. It's constrained by memcg limits only. It can allocate gigabytes.
>  
> I have very briefly had a look at the core allocator parts (please note
> that my understanding of BPF is really close to zero so I might be
> missing a lot of implicit stuff). So by "constrained by memcg" you mean
> __GFP_ACCOUNT done from the allocation context (irq_work). The complete
> gfp mask is GFP_ATOMIC | __GFP_NOMEMALLOC | __GFP_NOWARN | __GFP_ACCOUNT,
> which means this allocation is not allowed to sleep, and GFP_ATOMIC
> implies __GFP_HIGH, i.e. access to memory reserves is allowed.
> Memcg charging code interprets this to mean that the hard limit can be
> breached, under the assumption that such breaches are rare and will be
> compensated in some way. The bulk allocator implemented here, however,
> doesn't reflect that and keeps allocating as long as it sees success,
> so the breach of the limit is bounded only by the number of objects to
> be allocated. If that number can be really large then this is a clear
> problem and __GFP_HIGH usage is not really appropriate.
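
To make that concrete, here is roughly what the memcg charge path does
with such a mask (paraphrased sketch; not the exact mm/memcontrol.c code):

	struct page_counter *counter;

	if (page_counter_try_charge(&memcg->memory, nr_pages, &counter))
		return 0;	/* fits under the hard limit */

	if (gfp_mask & __GFP_ATOMIC) {
		/* Force the charge: breach the hard limit on the
		 * assumption that this is rare and compensated later.
		 */
		page_counter_charge(&memcg->memory, nr_pages);
		return 0;
	}

	/* otherwise: try reclaim, retry, eventually fail */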

That gfp mask was a copy-paste from the networking stack; see kmalloc_reserve().
Not sure whether it's a bug there or not.
In a separate thread we've agreed to convert all bpf allocations
to GFP_NOWAIT. For this patch set I've already fixed it in my branch.
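
For reference, the fixed allocation looks roughly like this (a sketch;
bpf_mem_cache, unit_size and numa_node are illustrative names, not
necessarily what's in the patch):

/* GFP_NOWAIT still forbids sleeping but, unlike GFP_ATOMIC, does
 * not imply __GFP_HIGH, so the memcg hard limit is enforced
 * instead of breached.
 */
static void *alloc_one(struct bpf_mem_cache *c)
{
	gfp_t flags = GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT;

	return kmalloc_node(c->unit_size, flags, c->numa_node);
}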

> Also, I do not see any tracking of the overall memory sitting in these
> pools, and I think that would be really appropriate. As there doesn't
> seem to be any reclaim mechanism implemented, this can hide quite a lot
> of unreachable memory.
> 
> Finally, it is not really clear what kind of entity the lifetime of
> these caches is bound to. Let's say the system goes OOM: is any process
> responsible for it, and would a cleanup be done if that process gets
> killed?

We've been asking these questions for years and have been trying to
come up with a solution.
bpf progs are not analogous to user space processes.
There are bpf progs that function completely without a user space component.
bpf progs are pretty close to being full-featured kernel modules, with
the difference that bpf progs are safe and portable, and users have
full visibility into them (source code, line info, type info, etc).
Unlike kernel modules, they are not binary blobs.
But from an OOM perspective they're pretty much like .ko-s.
Which kernel module would you force unload when the system is OOMing?
Force unloading .ko-s will likely crash the system.
Force unloading bpf progs may be equally bad. The system won't crash,
but it may be left in a sorry state. The bpf prog could have been doing
security enforcement, or running a network firewall, or providing key
insights to critical user space components like systemd or a health
check daemon.
We've been discussing ideas on how to rank progs and automatically clean
up the system state when progs have to be unloaded. Some sort of
destructor mechanism. Fingers crossed we will have it eventually.
bpf infra keeps track of everything, of course.
Technically we can detach, unpin and unload everything, and all memory
will be returned back to the system.
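For example, a full teardown from user space looks roughly like this (a
sketch using libbpf; error handling omitted, and the names link, obj and
pin_dir are illustrative):

#include <bpf/libbpf.h>

/* Sketch: release everything one prog holds. Once the last
 * reference (attachment, pin, fd) is gone, the kernel frees all
 * memory held by the prog and its maps.
 */
static void teardown(struct bpf_link *link, struct bpf_object *obj,
		     const char *pin_dir)
{
	bpf_link__destroy(link);              /* detach from its hook */
	bpf_object__unpin_maps(obj, pin_dir); /* drop bpffs pins */
	bpf_object__close(obj);               /* unload progs and maps */
}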
Anyhow, this is not a new problem, and it's orthogonal to this patch set.
bpf progs have been doing memory allocation from day one, 8 years ago.
This patch set is trying to make it 100% safe.
Currently it's 99% safe.


