On 10/21/24 20:31, David Wei wrote:
> On 2024-10-21 07:40, Paolo Abeni wrote:
>> On 10/16/24 20:52, David Wei wrote:
>>> @@ -540,6 +562,34 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
>>>  	.scrub			= io_pp_zc_scrub,
>>>  };
>>>  
>>> +static void io_napi_refill(void *data)
>>> +{
>>> +	struct io_zc_refill_data *rd = data;
>>> +	struct io_zcrx_ifq *ifq = rd->ifq;
>>> +	netmem_ref netmem;
>>> +
>>> +	if (WARN_ON_ONCE(!ifq->pp))
>>> +		return;
>>> +
>>> +	netmem = page_pool_alloc_netmem(ifq->pp, GFP_ATOMIC | __GFP_NOWARN);
>>> +	if (!netmem)
>>> +		return;
>>> +	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
>>> +		return;
>>> +
>>> +	rd->niov = netmem_to_net_iov(netmem);
>>> +}
>>> +
>>> +static struct net_iov *io_zc_get_buf_task_safe(struct io_zcrx_ifq *ifq)
>>> +{
>>> +	struct io_zc_refill_data rd = {
>>> +		.ifq = ifq,
>>> +	};
>>> +
>>> +	napi_execute(ifq->napi_id, io_napi_refill, &rd);
>>
>> Under UDP flood the above has unbounded/unlimited execution time, unless
>> you set NAPI_STATE_PREFER_BUSY_POLL. Is the allocation schema here
>> somehow preventing such unlimited wait?
>
> Hi Paolo. Do you mean that under UDP flood, napi_execute() will have
> unbounded execution time because napi_state_start_busy_polling() and
> need_resched() will always return false? My understanding is that
> need_resched() will eventually kick the caller task out of
> napi_execute().

Sorry for the short reply. Let's try to consolidate this discussion on
patch 8, which is strictly related and has the relevant code more handy.

Thanks,

Paolo
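
For readers without the series handy, the shape of the wait being
discussed is roughly the following. This is a simplified sketch, not
the code from the patch: napi_execute_sketch() is a made-up name, and
NAPI ownership release, netpoll locking and RT handling are all
omitted. It only illustrates why need_resched() is the exit condition
David refers to.

/*
 * Hedged sketch of the napi_execute() busy-wait. While softirq keeps
 * the NAPI instance scheduled (e.g. under UDP flood), the cmpxchg
 * below keeps failing, so the loop spins until need_resched() fires.
 */
static void napi_execute_sketch(struct napi_struct *napi,
				void (*cb)(void *), void *cb_arg)
{
	unsigned long val;

	for (;;) {
		val = READ_ONCE(napi->state);

		/* Try to take NAPI ownership away from softirq. */
		if (!(val & (NAPIF_STATE_SCHED | NAPIF_STATE_IN_BUSY_POLL)) &&
		    cmpxchg(&napi->state, val,
			    val | NAPIF_STATE_SCHED |
				  NAPIF_STATE_IN_BUSY_POLL) == val) {
			cb(cb_arg);	/* io_napi_refill() runs here */
			break;
		}

		/* Under sustained traffic this is the only way out. */
		if (need_resched())
			break;
		cpu_relax();
	}
}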