Re: [PATCH v9 net-next 15/15] net: Move per-CPU flush-lists to bpf_net_context on PREEMPT_RT.

On 2024-06-21 19:05:58 [-0700], Jakub Kicinski wrote:
> On Thu, 20 Jun 2024 15:22:05 +0200 Sebastian Andrzej Siewior wrote:
> >  void __cpu_map_flush(void)
> >  {
> > -	struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
> > +	struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list();
> >  	struct xdp_bulk_queue *bq, *tmp;
> >  
> >  	list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
> 
> Most of the time we'll init the flush list just to walk its (empty)
> self. It feels really tempting to check the init flag inside
> xdp_do_flush() already. Since the various sub-flush handles may not get
> inlined - we could save ourselves not only the pointless init, but
> also the function calls. So the code would potentially be faster than
> before the changes?

Yeah. We have lazy init now, and the debug check forces that init. So not
only does xdp_do_check_flushed() initialize all three lists, but
xdp_do_flush() does too, even if the caller used only one of them.
This can certainly be optimized based on the init flag of the lists.

> Can be a follow up, obviously.

Will add it to my list.

Sebastian
