From: Paolo Abeni <pabeni@xxxxxxxxxx>
Date: Thu, 9 Jan 2025 14:16:22 +0100

> On 1/7/25 4:29 PM, Alexander Lobakin wrote:
>> Add a function to get an array of skbs from the NAPI percpu cache.
>> It's supposed to be a drop-in replacement for
>> kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC) and
>> xdp_alloc_skb_bulk(GFP_ATOMIC). The difference (apart from the
>> requirement to call it only from the BH) is that it tries to use
>> as many NAPI cache entries for skbs as possible, and allocate new
>> ones only if needed.

[...]

>> +u32 napi_skb_cache_get_bulk(void **skbs, u32 n)
>> +{
>> +	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
>> +	u32 bulk, total = n;
>> +
>> +	local_lock_nested_bh(&napi_alloc_cache.bh_lock);
>> +
>> +	if (nc->skb_count >= n)
>> +		goto get;
>
> I (mis?)understood from the commit message this condition should be
> likely, too?!?

It depends. I didn't want to make this unlikely(), as it will happen
sometimes anyway, while the two branches below can only happen when the
system is low on memory.

>
>> +	/* No enough cached skbs. Try refilling the cache first */
>> +	bulk = min(NAPI_SKB_CACHE_SIZE - nc->skb_count, NAPI_SKB_CACHE_BULK);
>> +	nc->skb_count += kmem_cache_alloc_bulk(net_hotdata.skbuff_cache,
>> +					       GFP_ATOMIC | __GFP_NOWARN, bulk,
>> +					       &nc->skb_cache[nc->skb_count]);
>> +	if (likely(nc->skb_count >= n))
>> +		goto get;
>> +
>> +	/* Still not enough. Bulk-allocate the missing part directly, zeroed */
>> +	n -= kmem_cache_alloc_bulk(net_hotdata.skbuff_cache,
>> +				   GFP_ATOMIC | __GFP_ZERO | __GFP_NOWARN,
>> +				   n - nc->skb_count, &skbs[nc->skb_count]);
>
> You should probably cap 'n' to NAPI_SKB_CACHE_SIZE. Also what about
> latency spikes when n == 48 (should be the maximum possible with such
> limit) here?

The current users never allocate more than 8 skbs in one bulk. Anyway,
the current approach wants to be a drop-in for
kmem_cache_alloc_bulk(skbuff_cache), which doesn't cap anything.

Note that this last branch allocates into @skbs directly, not into the
percpu NAPI cache.

>
> /P

Thanks,
Olek
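
P.S. A minimal sketch of the intended drop-in usage, for reference. The
function name, its arguments and the __build_skb_around() step are made up
for illustration; only napi_skb_cache_get_bulk() itself comes from the
patch, and as I read it the return value can be smaller than the request
when the slab bulk allocation fails under memory pressure:

#include <linux/kernel.h>
#include <linux/skbuff.h>

/* Hypothetical batched Rx path. Must run in BH context (e.g. from a
 * NAPI poll routine), same as the other napi_alloc_cache users.
 */
static u32 hypothetical_rx_batch(void **frames, unsigned int *sizes, u32 n)
{
	void *skbs[8];
	u32 i, got;

	if (n > ARRAY_SIZE(skbs))
		n = ARRAY_SIZE(skbs);

	/* Entries come from the per-CPU NAPI cache when possible; only
	 * the missing part is bulk-allocated from the skbuff slab cache.
	 */
	got = napi_skb_cache_get_bulk(skbs, n);

	for (i = 0; i < got; i++) {
		struct sk_buff *skb = skbs[i];

		/* Finish initializing each skb head around its data
		 * buffer, as the existing xdp_alloc_skb_bulk() users do.
		 */
		__build_skb_around(skb, frames[i], sizes[i]);
	}

	return got;
}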