On Thu, Nov 12, 2020 at 12:41 PM Björn Töpel <bjorn.topel@xxxxxxxxx> wrote:
>
> From: Björn Töpel <bjorn.topel@xxxxxxxxx>
>
> The existing busy-polling mode, enabled by the SO_BUSY_POLL socket
> option or system-wide using the /proc/sys/net/core/busy_read knob, is
> opportunistic. That means that if the NAPI context is not scheduled,
> the busy-polling syscall will poll it. If, after busy-polling, the
> budget is exceeded, the busy-polling logic will schedule the NAPI
> context onto the regular softirq handling.
>
> One implication of the behavior above is that a busy/heavily loaded
> NAPI context will never enter/allow for busy-polling. Some
> applications prefer that most NAPI processing be done by busy-polling.
>
> This series adds a new socket option, SO_PREFER_BUSY_POLL, that works
> in concert with the napi_defer_hard_irqs and gro_flush_timeout
> knobs. The napi_defer_hard_irqs and gro_flush_timeout knobs were
> introduced in commit 6f8b12d661d0 ("net: napi: add hard irqs deferral
> feature"), and allow a user to defer re-enabling interrupts and
> instead schedule the NAPI context from a watchdog timer. When a user
> enables SO_PREFER_BUSY_POLL, again with the other knobs enabled, and
> the NAPI context is being processed by a softirq, the softirq NAPI
> processing will exit early to allow the busy-polling to be performed.
>
> If the application stops performing busy-polling via a system call,
> the watchdog timer defined by gro_flush_timeout will time out, and
> regular softirq handling will resume.
>
> In summary: heavy-traffic applications that prefer busy-polling over
> softirq processing should use this option.
>
> Example usage:
>
>   $ echo 2 | sudo tee /sys/class/net/ens785f1/napi_defer_hard_irqs
>   $ echo 200000 | sudo tee /sys/class/net/ens785f1/gro_flush_timeout
>
> Note that the timeout should be larger than the userspace processing
> window, otherwise the watchdog will time out and fall back to regular
> softirq processing.
>
> Enable the SO_BUSY_POLL/SO_PREFER_BUSY_POLL options on your socket.
>
> Signed-off-by: Björn Töpel <bjorn.topel@xxxxxxxxx>

...

> diff --git a/net/core/sock.c b/net/core/sock.c
> index 727ea1cc633c..248f6a763661 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1159,6 +1159,12 @@ int sock_setsockopt(struct socket *sock, int level, int optname,
>                         sk->sk_ll_usec = val;
>                 }
>                 break;
> +       case SO_PREFER_BUSY_POLL:
> +               if (valbool && !capable(CAP_NET_ADMIN))
> +                       ret = -EPERM;
> +               else
> +                       sk->sk_prefer_busy_poll = valbool;

        WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);

So that KCSAN is happy when readers read this field while the socket is
not locked.

> +               break;
>  #endif
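
For reference, a minimal userspace sketch of enabling the new option on
a socket (this assumes the series is applied; SO_PREFER_BUSY_POLL is
defined here in case the uapi headers are older, matching the value the
series adds to asm-generic/socket.h):

        /* build: gcc -o prefer_bp prefer_bp.c */
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #ifndef SO_PREFER_BUSY_POLL
        #define SO_PREFER_BUSY_POLL 69  /* from this series' uapi change */
        #endif

        int main(void)
        {
                int usecs = 100; /* SO_BUSY_POLL: busy-poll up to 100 us */
                int one = 1;
                int fd = socket(AF_INET, SOCK_DGRAM, 0);

                if (fd < 0) {
                        perror("socket");
                        return 1;
                }
                /* Opportunistic busy-polling, as before this series. */
                if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                               &usecs, sizeof(usecs)))
                        perror("setsockopt(SO_BUSY_POLL)");
                /* Prefer busy-polling over softirq processing; enabling
                 * this needs CAP_NET_ADMIN, per the hunk quoted above. */
                if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
                               &one, sizeof(one)))
                        perror("setsockopt(SO_PREFER_BUSY_POLL)");
                close(fd);
                return 0;
        }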
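
To make the WRITE_ONCE() point concrete, a sketch of how the annotated
store and a lockless reader would pair up (sk_prefer_busy_poll_enabled()
is a hypothetical helper for illustration only, not part of the patch):

        /* Writer, in sock_setsockopt(), with the socket lock held: */
        case SO_PREFER_BUSY_POLL:
                if (valbool && !capable(CAP_NET_ADMIN))
                        ret = -EPERM;
                else
                        WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);
                break;

        /* Reader, on a lockless fast path; pairs with the WRITE_ONCE()
         * above so KCSAN sees an intentional, marked access: */
        static inline bool sk_prefer_busy_poll_enabled(const struct sock *sk)
        {
                return READ_ONCE(sk->sk_prefer_busy_poll);
        }

Without both annotations, KCSAN would flag the plain store racing with
lockless reads on the busy-poll path.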