On Mon, Aug 22, 2022 at 7:57 PM Hou Tao <houtao@xxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> On 8/23/2022 9:29 AM, Alexei Starovoitov wrote:
> > On Mon, Aug 22, 2022 at 5:56 PM Hao Luo <haoluo@xxxxxxxxxx> wrote:
> >>
> SNIP
> >> Tao, thanks very much for the test. I played with it a bit and I can
> >> confirm that map_update failures are seen under CONFIG_PREEMPT. The
> >> failures are not present under CONFIG_PREEMPT_NONE or
> >> CONFIG_PREEMPT_VOLUNTARY. I experimented with a few alternatives I was
> >> thinking of and they didn't work. It looks like Hou Tao's idea,
> >> promoting migrate_disable to preempt_disable, is probably the best we
> >> can do for the non-RT case. So
> > preempt_disable is also faster than migrate_disable,
> > so patch 1 will not only fix this issue, but will also improve performance.
> >
> > Patch 2 is too hacky though.
> > I think it's better to wait until my bpf_mem_alloc patches land.
> > The RT case won't be special anymore. We will be able to remove the
> > htab_use_raw_lock() helper and unconditionally use raw_spin_lock.
> > With bpf_mem_alloc there is no inline memory allocation anymore.
> OK. Looking forward to the landing of the BPF-specific memory allocator.
> >
> > So please address Hao's comments, add a test, and
> > resubmit patches 1 and 3.
> > Also please use [PATCH bpf-next] in the subject to help the BPF CI
> > and patchwork scripts.
> Will do. And to bpf-next instead of bpf?

bpf-next is almost always preferred for fixes for corner cases
that have been around for some time.
The bpf tree is for security and high-priority fixes.
bpf-next gives fixes time to prove themselves.