Re: [PATCH net v4 3/3] net: prevent address rewrite in kernel_bind()

On Thu, 2023-09-21 at 09:30 -0400, Willem de Bruijn wrote:
> On Thu, Sep 21, 2023 at 4:35 AM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
> > 
> > On Wed, 2023-09-20 at 09:30 -0400, Willem de Bruijn wrote:
> > > Jordan Rife wrote:
> > > > Similar to the change in commit 0bdf399342c5 ("net: Avoid address
> > > > overwrite in kernel_connect"), BPF hooks run on bind may rewrite the
> > > > address passed to kernel_bind(). This change
> > > > 
> > > > 1) Makes a copy of the bind address in kernel_bind() to insulate
> > > >    callers.
> > > > 2) Replaces direct calls to sock->ops->bind() with kernel_bind()
> > > > 
> > > > Link: https://lore.kernel.org/netdev/20230912013332.2048422-1-jrife@xxxxxxxxxx/
> > > > Fixes: 4fbac77d2d09 ("bpf: Hooks for sys_bind")
> > > > Cc: stable@xxxxxxxxxxxxxxx
> > > > Signed-off-by: Jordan Rife <jrife@xxxxxxxxxx>
> > > 
> > > Reviewed-by: Willem de Bruijn <willemb@xxxxxxxxxx>
> > 
> > I fear this is going to cause a few conflicts with other trees. We can
> > still take it, but at very least we will need some acks from the
> > relevant maintainers.
> > 
> > I *think* it would be easier to split this and patch 1/3 into
> > individual patches targeting the different trees; hopefully not many
> > additional patches will be required. What do you think?
> 
> Roughly how many patches would result from this one patch? From the
> stat line I count { block/drbd, char/agp, infiniband, isdn, fs/dlm,
> fs/ocfs2, fs/smb, netfilter, rds }. That's worst case nine callers
> plus the core patch to net/socket.c?

I think there should be no problem taking the rds and nf/ipvs changes
directly.

Additionally, I think the non-network changes could consolidate the
bind and connect changes into a single patch.

That should be 7 non-network patches overall.

> If logistically simpler and you prefer the approach, we can also
> revisit Jordan's original approach, which embedded the memcpy inside
> the BPF branches.
> 
> That has the slight benefit to in-kernel callers that it limits the
> cost of the memcpy to cgroup_bpf_enabled. But it adds a superfluous
> second copy for the more common userspace callers, again only when
> cgroup_bpf_enabled.
> 
> If so, it should at least move the whole logic around those BPF hooks
> into helper functions.

IMHO the approach implemented here is preferable; I suggest going
forward with it.

Thanks,

Paolo
