On Mon, Nov 23, 2020 at 3:11 PM Magnus Karlsson <magnus.karlsson@xxxxxxxxx> wrote:
>
> On Wed, Nov 18, 2020 at 9:26 AM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> >
> > datagram_poll will judge the current socket status (EPOLLIN, EPOLLOUT)
> > based on the traditional socket information (e.g., sk_wmem_alloc), but
> > this does not apply to xsk. So this patch uses sock_poll_wait instead of
> > datagram_poll, and the mask is calculated by xsk_poll.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> > ---
> >  net/xdp/xsk.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index cfbec39..7f0353e 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -477,11 +477,13 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
> >  static __poll_t xsk_poll(struct file *file, struct socket *sock,
> >  			 struct poll_table_struct *wait)
> >  {
> > -	__poll_t mask = datagram_poll(file, sock, wait);
> > +	__poll_t mask = 0;
>
> It would indeed be nice to not execute a number of tests in
> datagram_poll that will never be triggered. It will speed up things
> for sure. But we need to make sure that removing those flags that
> datagram_poll sets do not have any bad effects in the code above this.
> But let us tentatively keep this patch for the next version of the
> patch set. Just need to figure out how to solve your problem in a nice
> way first. See discussion in patch 0/3.
>
> >  	struct sock *sk = sock->sk;
> >  	struct xdp_sock *xs = xdp_sk(sk);
> >  	struct xsk_buff_pool *pool;
> >
> > +	sock_poll_wait(file, sock, wait);
> > +
> >  	if (unlikely(!xsk_is_bound(xs)))
> >  		return mask;
> >
> > --
> > 1.8.3.1

The fix looks correct and it will speed things up too as a bonus. Please include this patch in the v2 as outlined in my answer to 0/3.

Thanks!