Hi,

On 2/12/2023 12:34 AM, Alexei Starovoitov wrote:
> On Sat, Feb 11, 2023 at 8:33 AM Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
>> On Fri, Feb 10, 2023 at 5:10 PM Hou Tao <houtao@xxxxxxxxxxxxxxx> wrote:
>>>>> Hou, are you planning to resubmit this change? I also hit this while
>>>>> testing my changes on bpf-next.
>>>> Are you talking about the whole patch set or just GFP_ZERO in mem_alloc?
>>>> The former will take a long time to settle.
>>>> The latter is trivial.
>>>> To unblock yourself just add GFP_ZERO in an extra patch?
>>> Sorry for the long delay. I just found time to run some tests comparing
>>> the performance of bzero and ctor. After that is done, I will resubmit
>>> next week.
>> I still don't like ctor as a concept. In general, callbacks in the
>> critical path are guaranteed to be slow due to retpoline overhead.
>> Please send a patch to add GFP_ZERO.
>>
>> Also I realized that we can make the BPF_REUSE_AFTER_RCU_GP flag usable
>> without risking OOM by only waiting for a normal RCU GP and not
>> rcu_tasks_trace. This approach will work for the inner nodes of qptrie,
>> since bpf progs never see pointers to them. It will work for local
>> storage converted to bpf_mem_alloc too. It wouldn't need to use its own
>> call_rcu. It's also safe without uaf caveat in sleepable progs and
>> sleepable progs
> I meant 'safe with uaf caveat'.
> Safe because we wait for rcu_tasks_trace later, before returning the
> memory to the kernel.
>
>> can use explicit bpf_rcu_read_lock() when they want to avoid uaf.
>> So please respin the set with rcu gp only and that new flag.
Besides BPF_REUSE_AFTER_RCU_GP, would BPF_FREE_AFTER_RCU_GP be a feasible
solution? Its downside is that it would force sleepable programs to use
bpf_rcu_read_{lock,unlock}() to access the returned pointers.
> .