Willem de Bruijn wrote:
> I've been playing with sockmap and ktls. They're fantastic tools.

Great, glad to get more eyes on it. Thanks.

> Combining them I did run into a few issues. I would like to
> understand (a) whether it's just me, (b) whether these are known
> issues, and (c) to get some feedback on an initial hacky patch.

There are still a few outstanding things I need to flush out of my
queue; they are noted below.

> My test [1] sets up an echo request/response between a client and
> server, optionally interposed by an "icept" guard process on each
> side, and optionally enabling ktls between the icept processes.
>
> Without ktls, most variants of interpositioning {iptables, iptables
> + splice(), iptables + sockmap splice, sk_msg to icept tx} work.
>
> Only sk_msg redirection to icept ingress with BPF_F_INGRESS does
> not, if the destination socket has a verdict program. I *think*
> this is intentional, judging from commit 552de9106882 ("bpf:
> sk_msg, fix socket data_ready events") explicitly ensuring that the
> process gets woken on new data if a socket has a verdict program
> and another socket redirects to it, as opposed to passing the data
> to the program.

Right.

> For this workload, more interesting is sk_msg directly to icept
> egress, anyway. This works without ktls. Support for ktls was added
> in commit d3b18ad31f93 ("tls: add bpf support to sk_msg handling").
> The relevant callback function tls_sw_sendpage_locked was not
> immediately used and was subsequently removed. It appears to work
> after reverting that removal, as in commit cc1dbdfed023 ("Revert
> "net/tls: remove unused function tls_sw_sendpage_locked""), plus
> registering the function

I don't fully understand this. Are you saying a BPF_SK_MSG_VERDICT
program attached to a ktls socket is not being called? Or that
packets are being dropped? Or does it not work even with just ktls
and no BPF involved?

> @@ -859,6 +861,7 @@ static int __init tls_register(void)
>  	tls_sw_proto_ops = inet_stream_ops;
>  	tls_sw_proto_ops.splice_read = tls_sw_splice_read;
> +	tls_sw_proto_ops.sendpage_locked = tls_sw_sendpage_locked;
>
> and additionally allowing MSG_NO_SHARED_FRAGS:
>
>  int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
>  			   int offset, size_t size, int flags)
>  {
>  	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
> -		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
> +		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
> +		      MSG_NO_SHARED_FRAGS))
>  		return -ENOTSUPP;

If you had added MSG_NO_SHARED_FRAGS to the existing tls_sw_sendpage,
would that have been sufficient?

> and not registering parser+verdict programs on the destination
> socket. Note that without ktls this mode also works with such
> programs attached.

Right, ingress + ktls is known to be broken at the moment. I also
have plans to clean up the ingress side at some point; the current
model is a bit clumsy IMO, and the workqueue adds latency spikes at
the 99+ percentiles. The plan is to make the ingress side work like
the egress side: no workqueue, with verdict+parser done in a single
program.

> Lastly, sockmap splicing from icept ingress to egress (no sk_msg)
> also stops working when I enable ktls on the egress socket. I'm
> taking a look at that next. But this email is long enough already ;)

Yes, this is a known bug, and I've got a set of patches to address
it. I've been trying to get to it for a while now and just resolved
a few other things on my side, so I plan to do this Monday/Tuesday
next week.
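As an aside, the sk_msg egress redirect mode above only needs a small
program attached to the sockmap. A minimal sketch, for reference (the
map layout, key 0, and the names here are illustrative, not taken
from Willem's tree):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Sockmap holding the icept sockets; this sketch assumes slot 0
     * holds the icept egress socket.
     */
    struct {
            __uint(type, BPF_MAP_TYPE_SOCKMAP);
            __uint(max_entries, 2);
            __type(key, __u32);
            __type(value, __u32);
    } sock_map SEC(".maps");

    SEC("sk_msg")
    int icept_redir(struct sk_msg_md *msg)
    {
            /* Flags 0 redirects into the target socket's egress (tx)
             * path; BPF_F_INGRESS would queue to its ingress side
             * instead, which is the combination reported broken with
             * a verdict program on the destination.
             */
            return bpf_msg_redirect_map(msg, &sock_map, 0, 0);
    }

    char _license[] SEC("license") = "GPL";

The program attaches to the map with BPF_SK_MSG_VERDICT and then runs
on every sendmsg/sendfile issued on sockets present in the map.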
FWIW, there is also a bugfix on the way for a case where receiving a
FIN/RST after data has been redirected causes that data to be
dropped.

> Thanks for having a look!
>
> Willem
>
> [1] https://github.com/wdebruij/kerneltools/tree/icept.2
>
> Probably more readable is the stack of commits, one per feature:
>
> c86c112 icept: initial client/server test
> 727a8ae icept: add iptables interception
> 60c34b2 icept: add splice interception
> 03a516a icept: add sockmap interception
> c9c6103 icept: run client and server in cgroup
> 579bcae icept: add skmsg interception
> e1b0d17 icept: add kTLS
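P.S. For anyone reproducing the kTLS step: enabling ktls on a
connected TCP socket is the usual TCP_ULP + setsockopt(TLS_TX) dance.
A rough sketch with zeroed placeholder key material (a real setup
pulls key/iv/salt/rec_seq from the TLS handshake); the function name
is made up for illustration:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <linux/tls.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282     /* matches include/linux/socket.h */
    #endif
    #ifndef TCP_ULP
    #define TCP_ULP 31      /* older toolchains may lack the define */
    #endif

    /* Attach the "tls" ULP and program TX crypto state on a
     * connected TCP socket. Key material is zeroed here purely for
     * illustration.
     */
    static int enable_ktls_tx(int fd)
    {
            struct tls12_crypto_info_aes_gcm_128 ci;

            if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls",
                           sizeof("tls")))
                    return -1;

            memset(&ci, 0, sizeof(ci));
            ci.info.version = TLS_1_2_VERSION;
            ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
            /* ci.key, ci.iv, ci.salt, ci.rec_seq: from handshake */

            return setsockopt(fd, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }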