On Mon, Jan 23, 2023 at 2:03 PM Yaniv Agman <yanivagman@xxxxxxxxx> wrote:
>
> On Mon, Jan 23, 2023 at 11:25 PM Jakub Sitnicki <jakub@xxxxxxxxxxxxxx> wrote:
> >
> > On Mon, Jan 23, 2023 at 11:01 PM +02, Yaniv Agman wrote:
> > > On Mon, Jan 23, 2023 at 10:06 PM Martin KaFai Lau <martin.lau@xxxxxxxxx> wrote:
> > >>
> > >> On 1/23/23 9:32 AM, Yaniv Agman wrote:
> > >> >>> interrupted the first one. But even then, I will need to find a way to
> > >> >>> know if my program currently interrupts the run of another program -
> > >> >>> is there a way to do that?
> > >>
> > >> Maybe a percpu atomic counter to see if the bpf prog has been re-entered
> > >> on the same cpu.
> > >
> > > Not sure I understand how this will help. If I want to save local
> > > program data in a percpu map and I see that the counter is bigger than
> > > zero, should I ignore the event?
> >
> > map_update w/ BPF_F_LOCK disables preemption, if you're after updating
> > an entry atomically. But it can't be used with PERCPU maps today.
> > Perhaps that's needed now too.
>
> Yep. I think what is needed here is the ability to disable preemption
> from the bpf program - maybe even adding a helper for that?

I'm not sure what the issue is here.
The old preempt_disable() doesn't mean that one bpf program will never be
interrupted by another bpf prog. For example, a networking bpf prog running
under the old preempt_disable can call into something that has a kprobe
attached, and another tracing bpf prog will be called. The same can happen
after we switched to migrate_disable.
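
For reference, a minimal sketch of the percpu re-entry counter idea Martin
mentions, assuming a libbpf-style C program and a clang/kernel combination
that supports BPF atomic fetch-and-add (the map name, attach point, and
overall structure below are illustrative, not taken from this thread):

    /* Sketch: detect same-cpu re-entry of this bpf prog with a percpu counter. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
    } reentry_cnt SEC(".maps");

    SEC("kprobe/some_traced_function")   /* hypothetical attach point */
    int prog(struct pt_regs *ctx)
    {
            __u32 key = 0;
            __u64 *cnt = bpf_map_lookup_elem(&reentry_cnt, &key);

            if (!cnt)
                    return 0;

            /* Atomic fetch-and-add needs BPF atomics support (clang -mcpu=v3,
             * kernel >= 5.12). If the old value is non-zero, this invocation
             * interrupted another run of the prog on this cpu.
             */
            if (__sync_fetch_and_add(cnt, 1) > 0) {
                    /* Re-entered: e.g. skip touching percpu scratch data here. */
                    __sync_fetch_and_add(cnt, -1);
                    return 0;
            }

            /* ... normal event handling using percpu scratch maps ... */

            __sync_fetch_and_add(cnt, -1);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

Because the counter lives in a percpu map, only same-cpu re-entry (a kprobe
prog interrupting another prog mid-run, as described above) affects it, and
the atomic fetch keeps the counter's read-modify-write from being split by
such an interrupt.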