On Thu, 2018-08-16 at 12:26 +0200, Lukas Wunner wrote:
> On Thu, Aug 16, 2018 at 08:10:28PM +1000, Benjamin Herrenschmidt wrote:
> > On Thu, 2018-08-16 at 14:52 +0530, Hari Vyas wrote:
> > > There was an issue reported by my colleague Srinath while enabling
> > > a PCI bridge: a race condition was happening while setting the
> > > memory and master bits, i.e. the bits were over-written.
> > > As per my understanding, the is_busmaster and is_added bit race is
> > > an internal data-management issue and is quite different from the
> > > PCI bridge enabling issue.
> > > Am I missing something? I would be interested to know what exactly
> > > was affected by the is_busmaster fix.
> >
> > The is_busmaster fix isn't, I think, affecting anything; however, I
> > don't like the use of atomics for these things. It's a band-aid. If
> > we grow a proper pci_dev mutex, which is what I'm introducing here,
> > it should be able to also handle the is_added race etc.
>
> What is your rationale for introducing an additional mutex instead of
> utilizing the existing mutex in struct device via device_lock() /
> device_unlock(), or alternatively pci_dev_lock() / pci_dev_unlock()?
>
> This is also what Bjorn had suggested here:
> https://lore.kernel.org/lkml/20170816134354.GV32525@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

The device_lock() is really meant for the core device model
(drivers/base/*) internal synchronization. I'm rather wary of extending
its use to drivers.

In the specific PCI case, the obvious reason is that probe() is already
called with that lock held. Thus we would have an immediate deadlock if
we tried to take it in pci_enable_device(), for example, on the device
itself.

We could require that pci_enable_device() is always called with the
lock already held, and only take it for the parents when walking the
bus hierarchy, but frankly, this sort of implicit locking requirement
is a rabbit hole and will lead to all sorts of interesting bugs and
extra driver auditing requirements.
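To make the deadlock concrete, here is a minimal sketch of the call
chain being described. example_attach() and example_probe() are
hypothetical stand-ins for the driver core's attach path and a typical
PCI driver's probe(); the point is only that the core already holds the
per-device mutex around probe(), so re-taking it inside
pci_enable_device() would self-deadlock:

#include <linux/device.h>
#include <linux/pci.h>

/* Hypothetical condensation of the driver core's attach path
 * (drivers/base/dd.c): the per-device mutex is held around probe().
 */
static int example_attach(struct device *dev)
{
	int ret;

	device_lock(dev);		/* core takes the device mutex */
	ret = dev->driver->probe(dev);	/* driver probe() runs locked  */
	device_unlock(dev);
	return ret;
}

/* A typical PCI driver probe() body. If pci_enable_device() were
 * changed to do device_lock(&pdev->dev) internally, this would
 * self-deadlock: the struct device mutex is not recursive and is
 * already held by example_attach() above.
 */
static int example_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	return pci_enable_device(pdev);
}

Cheers,
Ben.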