Re: [Patch bpf] sock_map: convert cancel_work_sync() to cancel_work()

Jakub Sitnicki wrote:
> On Tue, Nov 01, 2022 at 01:01 PM -07, John Fastabend wrote:
> > Jakub Sitnicki wrote:
> >> On Fri, Oct 28, 2022 at 12:16 PM -07, Cong Wang wrote:
> >> > On Mon, Oct 24, 2022 at 03:33:13PM +0200, Jakub Sitnicki wrote:
> >> >> On Tue, Oct 18, 2022 at 11:13 AM -07, sdf@xxxxxxxxxx wrote:
> >> >> > On 10/17, Cong Wang wrote:
> >> >> >> From: Cong Wang <cong.wang@xxxxxxxxxxxxx>
> >> >> >
> >> >> >> Technically we don't need lock the sock in the psock work, but we
> >> >> >> need to prevent this work running in parallel with sock_map_close().
> >> >> >
> >> >> >> With this, we no longer need to wait for the psock->work synchronously,
> >> >> >> because when we reach here, either this work is still pending, or
> >> >> >> blocking on the lock_sock(), or it is completed. We only need to cancel
> >> >> >> the first case asynchronously, and we need to bail out the second case
> >> >> >> quickly by checking SK_PSOCK_TX_ENABLED bit.
> >> >> >
> >> >> >> Fixes: 799aa7f98d53 ("skmsg: Avoid lock_sock() in sk_psock_backlog()")
> >> >> >> Reported-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
> >> >> >> Cc: John Fastabend <john.fastabend@xxxxxxxxx>
> >> >> >> Cc: Jakub Sitnicki <jakub@xxxxxxxxxxxxxx>
> >> >> >> Signed-off-by: Cong Wang <cong.wang@xxxxxxxxxxxxx>
> >> >> >
> >> >> > This seems to remove the splat for me:
> >> >> >
> >> >> > Tested-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
> >> >> >
> >> >> > The patch looks good, but I'll leave the review to Jakub/John.
> >> >> 
> >> >> I can't poke any holes in it either.
> >> >> 
> >> >> However, it is harder for me to follow than the initial idea [1].
> >> >> So I'm wondering if there was anything wrong with it?
> >> >
> >> > It caused a warning in sk_stream_kill_queues() when I actually tested
> >> > it (after posting).
> >> 
> >> We must have seen the same warnings. They seemed unrelated so I went
> >> digging. We have a fix for these [1]. They were present since 5.18-rc1.
> >> 
> >> This seems like a step back when it comes to simplifying locking in
> >> sk_psock_backlog() that was done in 799aa7f98d53.
> >> >
> >> > Kinda, but it is still true that this sock lock is not for sk_socket
> >> > (merely for closing this race condition).
> >> 
> >> I really think the initial idea [2] is much nicer. I can turn it into a
> >> patch, if you are short on time.
> >> 
> >> With [1] and [2] applied, the dead lock and memory accounting warnings
> >> are gone, when running `test_sockmap`.
> >> 
> >> Thanks,
> >> Jakub
> >> 
> >> [1] https://lore.kernel.org/netdev/1667000674-13237-1-git-send-email-wangyufen@xxxxxxxxxx/
> >> [2] https://lore.kernel.org/netdev/Y0xJUc%2FLRu8K%2FAf8@pop-os.localdomain/
> >
> > Cong, what do you think? I tend to agree [2] looks nicer to me.
> >
> > @Jakub,
> >
> > Also I think we could simply drop the proposed cancel_work_sync in
> > sock_map_close()?
> >
> >  }
> > @@ -1619,9 +1619,10 @@ void sock_map_close(struct sock *sk, long timeout)
> >  	saved_close = psock->saved_close;
> >  	sock_map_remove_links(sk, psock);
> >  	rcu_read_unlock();
> > -	sk_psock_stop(psock, true);
> > -	sk_psock_put(sk, psock);
> > +	sk_psock_stop(psock);
> >  	release_sock(sk);
> > +	cancel_work_sync(&psock->work);
> > +	sk_psock_put(sk, psock);
> >  	saved_close(sk, timeout);
> >  }
> >
> > The sk_psock_put is going to cancel the work before destroying the psock,
> >
> >  sk_psock_put()
> >    sk_psock_drop()
> >      queue_rcu_work(system_wq, psock->rwork)
> >
> > and then in callback we
> >
> >   sk_psock_destroy()
> >     cancel_work_sync(&psock->work)
> >
> > although it might be nice to have the work cancelled earlier rather
> > than later.
> 
> Good point.
> 
> I kinda like the property that once close() returns we know there is no
> deferred work running for the socket.
> 
> I find the APIs where a deferred cleanup happens sometimes harder to
> write tests for.
> 
> But I don't really have a strong opinion here.

I don't either, and Cong left it in, so I'm good with that.
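
For reference, the teardown path from the call chain above looks roughly
like this, abridged from net/core/skmsg.c:

void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
{
	...
	sk_psock_stop(psock);

	INIT_RCU_WORK(&psock->rwork, sk_psock_destroy);
	queue_rcu_work(system_wq, &psock->rwork);
}

static void sk_psock_destroy(struct work_struct *work)
{
	struct sk_psock *psock = container_of(to_rcu_work(work),
					      struct sk_psock, rwork);

	/* The final synchronous cancel happens here, after an RCU grace
	 * period, so the psock cannot be freed out from under a
	 * still-running backlog work.
	 */
	cancel_work_sync(&psock->work);
	...
	sock_put(psock->sk);
	kfree(psock);
}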

Reviewing the backlog logic, though, I think there is another bug there,
but I haven't been able to trigger it in any of our tests.

The sk_psock_backlog() logic is,

 sk_psock_backlog(struct work_struct *work)
   mutex_lock()
   while (skb = ...) {
     ...
     do {
       ret = sk_psock_handle_skb()
       if (ret <= 0) {
         if (ret == -EAGAIN) {
           sk_psock_skb_state()
           goto end;
         }
         ...
       }
     } while (len);
     ...
   }
  end:
   mutex_unlock()

What I'm not seeing is: if we get an EAGAIN from sk_psock_handle_skb(),
how do we schedule the backlog again? For egress we would set the
SOCK_NOSPACE bit and then get a write space available callback, which
would do the schedule(). The ingress side, however, can fail with EAGAIN
through the alloc_sk_msg(GFP_ATOMIC) call. This is just a kzalloc,

   sk_psock_handle_skb()
    sk_psock_skb_ingress()
     sk_psock_skb_ingress_self()
       msg = alloc_sk_msg()
               kzalloc()          <- this can return NULL
       if (!msg)
          return -EAGAIN          <- could we stall now
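
For contrast, the egress side does have a kick: the write space callback
reschedules the backlog. Roughly, paraphrased from sk_psock_write_space()
in net/core/skmsg.c (ingress has no equivalent hook),

static void sk_psock_write_space(struct sock *sk)
{
	struct sk_psock *psock;
	void (*write_space)(struct sock *sk) = NULL;

	rcu_read_lock();
	psock = sk_psock(sk);
	if (likely(psock)) {
		/* TCP calls back when there is room to send again, so an
		 * egress -EAGAIN eventually gets the backlog rescheduled.
		 */
		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
			schedule_work(&psock->work);
		write_space = psock->saved_write_space;
	}
	rcu_read_unlock();
	if (write_space)
		write_space(sk);
}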


I think we could stall here if there is nothing else to kick it. I was
thinking about something like this,

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 1efdc47a999b..b96e95625027 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -624,13 +624,20 @@ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
 static void sk_psock_skb_state(struct sk_psock *psock,
                               struct sk_psock_work_state *state,
                               struct sk_buff *skb,
-                              int len, int off)
+                              int len, int off, bool ingress)
 {
        spin_lock_bh(&psock->ingress_lock);
        if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
                state->skb = skb;
                state->len = len;
                state->off = off;
+               /* For ingress we may not have a wakeup callback to
+                * trigger the reschedule, so we need to reschedule the
+                * retry ourselves. For egress the TCP stack will call
+                * back when it's a good time to retry.
+                */
+               if (ingress)
+                       schedule_work(&psock->work);
        } else {
                sock_drop(psock->sk, skb);
        }
@@ -678,7 +685,7 @@ static void sk_psock_backlog(struct work_struct *work)
                        if (ret <= 0) {
                                if (ret == -EAGAIN) {
                                        sk_psock_skb_state(psock, state, skb,
-                                                          len, off);
+                                                          len, off, ingress);
                                        goto end;
                                }
                                /* Hard errors break pipe and stop xmit. */


It's tempting to try and use the memory pressure callbacks, but those are
built for the skb cache, so I think overloading them is not so nice. The
drawback to the above is that it's possible no memory is available even
when we get back to the backlog. We could use a delayed reschedule, but
it's not clear what delay makes sense here. Maybe some backoff...
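
Purely to sketch the backoff idea -- this assumes psock->work were
converted to a struct delayed_work (dwork below) and psock grew a
retry_delay field, neither of which exists today:

	/* Hypothetical: retry_delay would be reset to 0 whenever the
	 * backlog makes progress. Constants are arbitrary, just to
	 * show the shape.
	 */
	if (ingress) {
		psock->retry_delay = psock->retry_delay ?
				     min(psock->retry_delay * 2, 32U) : 1;
		schedule_delayed_work(&psock->dwork,
				      msecs_to_jiffies(psock->retry_delay));
	}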

Any thoughts?

Thanks,
John


