From: Matthieu Baerts (NGI0) <matttbe@xxxxxxxxxx>

commit f09b0ad55a1196f5891663f8888463c0541059cb upstream.

When a peer decides to close one subflow in the middle of a connection
having multiple subflows, the receiver of the first FIN should accept
that, and close the subflow on its side as well. If not, the subflow
will stay half-closed, and could even continue to be used until the end
of the MPTCP connection or a reset from the network.

The issue has not been seen before, probably because the in-kernel
path-manager always sends an RM_ADDR before closing the subflow. Upon
reception of this RM_ADDR, the other peer will initiate the closure on
its side as well. On the other hand, if the RM_ADDR is lost, or if the
path-manager of the other peer only closes the subflow without sending
an RM_ADDR, the subflow would switch to TCP_CLOSE_WAIT, but that's it,
leaving the subflow half-closed.

So now, when the subflow switches to the TCP_CLOSE_WAIT state, and if
the MPTCP connection has not been closed before with a DATA_FIN, the
kernel owning the subflow schedules its worker to initiate the closure
on its side as well.

This issue can be easily reproduced with packetdrill, as shown in [1],
by creating an additional subflow, injecting a FIN+ACK before sending
the DATA_FIN, and expecting a FIN+ACK in return.

Fixes: 40947e13997a ("mptcp: schedule worker when subflow is closed")
Cc: stable@xxxxxxxxxxxxxxx
Link: https://github.com/multipath-tcp/packetdrill/pull/154 [1]
Reviewed-by: Mat Martineau <martineau@xxxxxxxxxx>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@xxxxxxxxxx>
Link: https://patch.msgid.link/20240826-net-mptcp-close-extra-sf-fin-v1-1-905199fe1172@xxxxxxxxxx
Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
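The diff below boils down to one extra condition: schedule the subflow
closure not only when the subflow is fully closed, but also when it is
half-closed (TCP_CLOSE_WAIT) while the MPTCP connection itself is still
established, i.e. no DATA_FIN has been exchanged yet. As a reading aid
only, here is a minimal stand-alone user-space sketch of that check;
the should_close_subflow() helper and the reduced tcp_state enum are
hypothetical stand-ins for the kernel's inet_sk_state_load() logic, not
part of this patch:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, reduced version of the kernel's TCP state set. */
enum tcp_state { TCP_ESTABLISHED, TCP_CLOSE_WAIT, TCP_CLOSE };

/* Mirrors (in positive form) the condition added below in
 * __mptcp_close_subflow() and subflow_sched_work_if_closed(): close
 * when the subflow is gone, or when the peer sent a FIN on it
 * (TCP_CLOSE_WAIT) while the MPTCP connection is still established,
 * meaning no DATA_FIN has been seen yet.
 */
static bool should_close_subflow(enum tcp_state ssk_state,
                                 enum tcp_state msk_state)
{
        if (ssk_state == TCP_CLOSE)
                return true;

        return ssk_state == TCP_CLOSE_WAIT && msk_state == TCP_ESTABLISHED;
}

int main(void)
{
        /* Peer closed one subflow mid-connection: now handled -> 1 */
        printf("%d\n", should_close_subflow(TCP_CLOSE_WAIT, TCP_ESTABLISHED));

        /* MPTCP-level teardown already under way: left to the regular
         * DATA_FIN path -> 0
         */
        printf("%d\n", should_close_subflow(TCP_CLOSE_WAIT, TCP_CLOSE_WAIT));

        return 0;
}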
 net/mptcp/protocol.c | 5 ++++-
 net/mptcp/subflow.c  | 8 ++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2533,8 +2533,11 @@ static void __mptcp_close_subflow(struct
 	mptcp_for_each_subflow_safe(msk, subflow, tmp) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+		int ssk_state = inet_sk_state_load(ssk);
 
-		if (inet_sk_state_load(ssk) != TCP_CLOSE)
+		if (ssk_state != TCP_CLOSE &&
+		    (ssk_state != TCP_CLOSE_WAIT ||
+		     inet_sk_state_load(sk) != TCP_ESTABLISHED))
 			continue;
 
 		/* 'subflow_data_ready' will re-sched once rx queue is empty */
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1255,12 +1255,16 @@ out:
 /* sched mptcp worker to remove the subflow if no more data is pending */
 static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
 {
-	if (likely(ssk->sk_state != TCP_CLOSE))
+	struct sock *sk = (struct sock *)msk;
+
+	if (likely(ssk->sk_state != TCP_CLOSE &&
+		   (ssk->sk_state != TCP_CLOSE_WAIT ||
+		    inet_sk_state_load(sk) != TCP_ESTABLISHED)))
 		return;
 
 	if (skb_queue_empty(&ssk->sk_receive_queue) &&
 	    !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
-		mptcp_schedule_work((struct sock *)msk);
+		mptcp_schedule_work(sk);
 }
 
 static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)

Patches currently in stable-queue which might be from matttbe@xxxxxxxxxx are

queue-6.10/selftests-mptcp-join-cannot-rm-sf-if-closed.patch
queue-6.10/mptcp-pm-fix-rm_addr-id-for-the-initial-subflow.patch
queue-6.10/mptcp-pm-fix-id-0-endp-usage-after-multiple-re-creations.patch
queue-6.10/mptcp-pm-skip-connecting-to-already-established-sf.patch
queue-6.10/mptcp-pm-reuse-id-0-after-delete-and-re-add.patch
queue-6.10/mptcp-avoid-duplicated-sub_closed-events.patch
queue-6.10/mptcp-pm-add_addr-0-is-not-a-new-address.patch
queue-6.10/mptcp-pm-do-not-remove-already-closed-subflows.patch
queue-6.10/mptcp-pm-send-ack-on-an-active-subflow.patch
queue-6.10/mptcp-pm-reset-mpc-endp-id-when-re-added.patch
queue-6.10/mptcp-close-subflow-when-receiving-tcp-fin.patch
queue-6.10/selftests-mptcp-join-check-re-re-adding-id-0-endp.patch
queue-6.10/selftests-mptcp-join-no-extra-msg-if-no-counter.patch
queue-6.10/mptcp-pr_debug-add-missing-n-at-the-end.patch
queue-6.10/selftests-mptcp-join-check-removing-id-0-endpoint.patch
queue-6.10/mptcp-sched-check-both-backup-in-retrans.patch