Re: [PATCH net-next v15 03/14] netdev: support binding dma-buf to netdevice

On Wed, 3 Jul 2024 09:56:45 -0700 Mina Almasry wrote:
> > Is this really sufficient in terms of locking? @binding is not
> > RCU-protected and neither is the reader guaranteed to be in
> > an RCU critical section. Actually the "reader" tries to take a ref
> > and use this struct so it's not even a pure reader.
> >
> > Let's add a lock or use one of the existing locks
> 
> Can we just use rtnl_lock() for this synchronization? It seems it's
> already locked everywhere that accesses mp_params.mp_priv, so the
> WRITE/READ_ONCE are actually superfluous. Both the dmabuf bind/unbind
> already take rtnl_lock, and the only other place that accesses
> mp_params.mp_priv is page_pool_init(). I think it's reasonable to
> assume rtnl_lock is also held during page_pool_init, no? AFAICT it
> would be very weird for some code path to be reconfiguring the driver
> page_pools without holding rtnl_lock?
> 
> What I want to do here is delete the incorrect comment, remove the
> READ/WRITE_ONCE, and maybe add a DEBUG_NET_WARN_ON(!rtnl_is_locked())
> in mp_dmabuf_devmem_init(), though that is probably too defensive.
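
(For readers of the archive: a minimal sketch of what the proposal
above could look like. It is only illustrative; the names
mp_dmabuf_devmem_init(), pool->mp_priv and
net_devmem_dmabuf_binding_get() follow this series and may differ in
the final code, and the sketch uses the in-tree assertion macro
DEBUG_NET_WARN_ON_ONCE() rather than the spelling used above.)

/* Sketch only: assumes page_pool_init() callers hold rtnl_lock, so a
 * plain pointer read replaces READ_ONCE().
 */
static int mp_dmabuf_devmem_init(struct page_pool *pool)
{
	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;

	/* Documents the locking assumption; possibly too defensive,
	 * as noted above.
	 */
	DEBUG_NET_WARN_ON_ONCE(!rtnl_is_locked());

	if (!binding)
		return -EINVAL;

	net_devmem_dmabuf_binding_get(binding);
	return 0;
}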

The only concern I have is driver error recovery paths. They may be
async and may happen outside of rtnl_lock. It's the same situation we
have with the queue <> NAPI <> IRQ mapping helpers: they require
rtnl_lock today, and Intel recently needed a number of fixes because
that complicates their error recovery paths.
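
(To make that concern concrete, here is a rough, hypothetical sketch;
the my_drv_* names are made up, while netif_queue_set_napi() is the
existing helper that requires rtnl_lock today.)

/* Hypothetical driver private data, for illustration only. */
struct my_drv_priv {
	struct net_device *netdev;
	struct napi_struct rx_napi;
	struct work_struct reset_task;
};

/* An async reset worker runs outside any existing rtnl_lock section,
 * so it must take RTNL itself before touching the queue <> NAPI
 * mapping (and, with the approach above, mp_params.mp_priv).
 */
static void my_drv_reset_task(struct work_struct *work)
{
	struct my_drv_priv *priv = container_of(work, struct my_drv_priv,
						 reset_task);

	rtnl_lock();
	/* re-establish the queue <-> NAPI mapping after the reset */
	netif_queue_set_napi(priv->netdev, 0, NETDEV_QUEUE_TYPE_RX,
			     &priv->rx_napi);
	rtnl_unlock();
}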

But I guess any locking here will take a non-trivial amount of
analysis.
Let's go with rtnl_lock, that's fine.


