From: Martin KaFai Lau <kafai@xxxxxx>
Date: Thu, 10 Dec 2020 11:33:40 -0800
> On Thu, Dec 10, 2020 at 02:58:10PM +0900, Kuniyuki Iwashima wrote:
> > [ ... ]
> >
> > > > I've implemented one-by-one migration only for the accept queue for now.
> > > > In addition to the concern about TFO queue,
> > > You meant this queue: queue->fastopenq.rskq_rst_head?
> >
> > Yes.
> >
> > > > Can "req" be passed?
> > > I did not look up the lock/race in details for that though.
> >
> > I think if we rewrite the code that frees TFO requests like the accept
> > queue handling that uses reqsk_queue_remove(), we can also migrate them.
> >
> > In this patchset, when selecting a listener for the accept queue, the TFO
> > queue of the same listener is also migrated to another listener in order
> > to prevent a TFO spoofing attack.
> >
> > If the requests in the accept queue are migrated one by one, I am
> > wondering which listener the requests in the TFO queue should be migrated
> > to in order to prevent the attack, or whether they should be freed.
> >
> > I think users need not know that the kernel keeps such requests to
> > prevent attacks, so passing them to the eBPF prog is confusing. But
> > redistributing them randomly without the user's intention can make some
> > irrelevant listeners unnecessarily drop new TFO requests, so this is
> > also bad. Moreover, freeing such requests does not seem good from a
> > security point of view.
> The current behavior (during process restart) also does not carry over
> this security queue. Will not carrying it in this patch make it less
> secure than the current behavior during process restart?

No, I thought I could make it more secure.

> Do you need it now, or is it something that can be considered later
> without changing uapi bpf.h?

No, I do not need it for any other reason, so I will simply free the
requests in the TFO queue.

Thank you.
> > > > ---8<---
> > > > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > > > index a82fd4c912be..d0ddd3cb988b 100644
> > > > --- a/net/ipv4/inet_connection_sock.c
> > > > +++ b/net/ipv4/inet_connection_sock.c
> > > > @@ -1001,6 +1001,29 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > >  }
> > > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > > >
> > > > +static bool inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk, struct request_sock *req)
> > > > +{
> > > > +	struct request_sock_queue *queue = &inet_csk(nsk)->icsk_accept_queue;
> > > > +	bool migrated = false;
> > > > +
> > > > +	spin_lock(&queue->rskq_lock);
> > > > +	if (likely(nsk->sk_state == TCP_LISTEN)) {
> > > > +		migrated = true;
> > > > +
> > > > +		req->dl_next = NULL;
> > > > +		if (queue->rskq_accept_head == NULL)
> > > > +			WRITE_ONCE(queue->rskq_accept_head, req);
> > > > +		else
> > > > +			queue->rskq_accept_tail->dl_next = req;
> > > > +		queue->rskq_accept_tail = req;
> > > > +		sk_acceptq_added(nsk);
> > > > +		inet_csk_reqsk_queue_migrated(sk, nsk, req);
> > > We need to first resolve the question raised in patch 5 regarding
> > > the update on req->rsk_listener though.
> >
> > In the unhash path, it is also safe to call sock_put() for the old
> > listener.
> >
> > In inet_csk_listen_stop(), the sk_refcnt of the listener is >= 1. If the
> > listener does not have immature requests, sk_refcnt is 1 and the
> > listener is freed in __tcp_close().
> >
> >   sock_hold(sk) in __tcp_close()
> >   sock_put(sk) in inet_csk_destroy_sock()
> >   sock_put(sk) in __tcp_close()
> I don't see how it is different here than in patch 5.
> I could be missing something.
>
> Let's continue the discussion on the other thread (patch 5) first.

The listening socket has two kinds of refcounts: one for itself and one for
each immature request. I think the listener still holds its own refcount at
least in inet_csk_listen_stop(), so sock_put() here never frees the
listener.