On Tue, 18 Jun 2019 at 06:14, Eric Biggers <ebiggers@xxxxxxxxxx> wrote:
>
> On Mon, Jun 17, 2019 at 10:09:33AM +0200, Ard Biesheuvel wrote:
> > diff --git a/include/linux/tcp.h b/include/linux/tcp.h
> > index c23019a3b264..9ea0e71f5c6a 100644
> > --- a/include/linux/tcp.h
> > +++ b/include/linux/tcp.h
> > @@ -58,12 +58,7 @@ static inline unsigned int tcp_optlen(const struct sk_buff *skb)
> >
> >  /* TCP Fast Open Cookie as stored in memory */
> >  struct tcp_fastopen_cookie {
> > -	union {
> > -		u8	val[TCP_FASTOPEN_COOKIE_MAX];
> > -#if IS_ENABLED(CONFIG_IPV6)
> > -		struct in6_addr addr;
> > -#endif
> > -	};
> > +	u64	val[TCP_FASTOPEN_COOKIE_MAX / sizeof(u64)];
> >  	s8	len;
> >  	bool	exp;	/* In RFC6994 experimental option format */
> >  };
>
> Is it okay that the cookies will depend on CPU endianness?
>

That depends on whether keys shared between hosts with different
endiannesses are expected to produce cookies that can be shared.

> > diff --git a/include/net/tcp.h b/include/net/tcp.h
> > index 96e0e53ff440..184930b02779 100644
> > --- a/include/net/tcp.h
> > +++ b/include/net/tcp.h
> > @@ -1628,9 +1628,9 @@ bool tcp_fastopen_defer_connect(struct sock *sk, int *err);
> >
> >  /* Fastopen key context */
> >  struct tcp_fastopen_context {
> > -	struct crypto_cipher *tfm[TCP_FASTOPEN_KEY_MAX];
> > -	__u8	key[TCP_FASTOPEN_KEY_BUF_LENGTH];
> > -	struct rcu_head	rcu;
> > +	__u8	key[TCP_FASTOPEN_KEY_MAX][TCP_FASTOPEN_KEY_LENGTH];
> > +	int	num;
> > +	struct rcu_head	rcu;
> >  };
>
> Why not use 'siphash_key_t' here?  Then the (potentially alignment-violating)
> cast in __tcp_fastopen_cookie_gen_cipher() wouldn't be needed.
>

These data structures are always kmalloc'ed, so the alignment is never
violated in practice. But I do take your point. My idea at the time of
the first RFC was that the actual MAC algo should be an implementation
detail, and so the key is just a buffer.
However, after Eric pointed out that setting the same key across
different hosts should produce compatible cookies (modulo the upgrade
scenario), it is true that the algorithm is an externally visible
property, so it might be better to change this into siphash_key_t[]
here.

> >  int tcp_fastopen_reset_cipher(struct net *net, struct sock *sk,
> >  			      void *primary_key, void *backup_key,
> >  			      unsigned int len)
> > @@ -115,11 +75,20 @@ int tcp_fastopen_reset_cipher(struct net *net, struct sock *sk,
> >  	struct fastopen_queue *q;
> >  	int err = 0;
> >
> > -	ctx = tcp_fastopen_alloc_ctx(primary_key, backup_key, len);
> > -	if (IS_ERR(ctx)) {
> > -		err = PTR_ERR(ctx);
> > +	ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
> > +	if (!ctx) {
> > +		err = -ENOMEM;
> >  		goto out;
> >  	}
> > +
> > +	memcpy(ctx->key[0], primary_key, len);
> > +	if (backup_key) {
> > +		memcpy(ctx->key[1], backup_key, len);
> > +		ctx->num = 2;
> > +	} else {
> > +		ctx->num = 1;
> > +	}
> > +
> >  	spin_lock(&net->ipv4.tcp_fastopen_ctx_lock);
> >  	if (sk) {
> >  		q = &inet_csk(sk)->icsk_accept_queue.fastopenq;
>
> Shouldn't there be a check that 'len == TCP_FASTOPEN_KEY_LENGTH'?  I see that
> all callers pass that, but it seems unnecessarily fragile for this to accept
> short lengths and leave uninitialized memory in that case.
>

Sure, I can add back the error handling path that previously handled
any errors from crypto_cipher_setkey() [which would perform the input
length checking in that case].

I'll spin an incremental patch covering the above.