On Tue, Oct 17, 2017 at 09:44:10AM -0700, Eric Dumazet wrote:
> On Tue, Oct 17, 2017 at 9:28 AM, Marcelo Ricardo Leitner
> <marcelo.leitner@xxxxxxxxx> wrote:
> > On Tue, Oct 17, 2017 at 11:31:30PM +0800, Xin Long wrote:
> >> On Tue, Oct 17, 2017 at 9:45 PM, Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
> >> > SCTP experts.
> >> >
> >> > syzkaller reported a few crashes in sctp_packet_config() with invalid
> >> > access to a deleted dst.
> >> >
> >> > The rcu_read_lock() in sctp_packet_config() is suspect.
> >> >
> >> > It does not protect anything at the moment.
> >> >
> >> > If we expect tp->dst to be manipulated/changed by another cpu/thread,
> >> > then we need proper rcu protection.
> >> >
> >> > Following patch to show what would be a minimal change (but obviously
> >> > bigger changes are needed, like sctp_transport_pmtu_check() and
> >> > sctp_transport_dst_check(), and proper sparse annotations)
> >> will check all places accessing tp->dst in sctp.
> >
> > I checked some and sctp_transport_dst_check() should be fine because
> > by then we are holding a reference on dst. Same goes to
> > sctp_transport_pmtu_check().
> Really ?
>
Yes,

> What about sctp_v4_err() -> sctp_icmp_redirect() -> sctp_transport_dst_check()
>
> It seems quite possible that the BH handler can access it, while
> socket is owned by user.

hidden here:

sctp_v4_err()
{
	...
	sk = sctp_err_lookup(net, AF_INET, skb, sctp_hdr(skb), &asoc, &transport);
	...
out_unlock:
	sctp_err_finish(sk, transport);
}

sctp_err_lookup()
{
	...
	bh_lock_sock(sk);

	/* If too many ICMPs get dropped on busy
	 * servers this needs to be solved differently.
	 */
	if (sock_owned_by_user(sk))
[A]		__NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);

	*app = asoc;
	*tpp = transport;
	return sk;
	...
}

Though that if () at [A] should be bailing out there, instead of falling
through and returning sk anyway. That's a bug. More like:

	if (sock_owned_by_user(sk)) {
		__NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
		goto out;
	}
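
To make Eric's "proper rcu protection" point above a bit more concrete, here
is a minimal sketch only (not the patch referenced in this thread) of what an
RCU-protected reader of tp->dst could look like. The helper name
sctp_transport_dst_get() is hypothetical, and it assumes tp->dst were
annotated __rcu with writers publishing it via rcu_assign_pointer():

	/* Sketch only: take a referenced dst from the transport under RCU,
	 * so that a concurrent route change cannot free it while e.g.
	 * sctp_packet_config() is still using it.
	 */
	static struct dst_entry *sctp_transport_dst_get(struct sctp_transport *tp)
	{
		struct dst_entry *dst;

		rcu_read_lock();
		dst = rcu_dereference(tp->dst);
		/* dst_hold_safe() fails once the refcount has already hit
		 * zero, i.e. the dst is being destroyed; treat that as
		 * "no cached route" and let the caller re-route.
		 */
		if (dst && !dst_hold_safe(dst))
			dst = NULL;
		rcu_read_unlock();

		return dst;	/* caller drops it with dst_release() */
	}

Readers such as sctp_packet_config() would then work on the dst returned here
and dst_release() it when done, instead of dereferencing tp->dst directly,
roughly the pattern already used for the RCU-protected sk->sk_dst_cache.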