On Sat, Dec 17, 2022 at 01:59:32AM +0000, Al Viro wrote:
> On Fri, Dec 16, 2022 at 03:54:09PM -0800, Boqun Feng wrote:
> > On Fri, Dec 16, 2022 at 11:39:21PM +0000, Al Viro wrote:
> > > [Boqun Feng Cc'd]
> > >
> > > On Fri, Dec 16, 2022 at 03:26:21AM -0800, Linus Torvalds wrote:
> > > > On Thu, Dec 15, 2022 at 7:41 PM Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > CPU1: ptrace(2)
> > > > >         ptrace_check_attach()
> > > > >           read_lock(&tasklist_lock);
> > > > >
> > > > > CPU2: setpgid(2)
> > > > >         write_lock_irq(&tasklist_lock);
> > > > >         spins
> > > > >
> > > > > CPU1: takes an interrupt that would call kill_fasync().  grep and
> > > > > the first instance of kill_fasync() is in hpet_interrupt() - it's
> > > > > not something exotic.  IRQs disabled on CPU2 won't stop it.
> > > > >         kill_fasync(..., SIGIO, ...)
> > > > >           kill_fasync_rcu()
> > > > >             read_lock_irqsave(&fa->fa_lock, flags);
> > > > >             send_sigio()
> > > > >               read_lock_irqsave(&fown->lock, flags);
> > > > >               read_lock(&tasklist_lock);
> > > > >
> > > > > ... and CPU1 spins as well.
> > > >
> > > > Nope. See kernel/locking/qrwlock.c:
> > >
> > > [snip rwlocks are inherently unfair, queued ones are somewhat milder,
> > > but all implementations have writers-starving behaviour for
> > > read_lock() at least when in_interrupt()]
> > >
> > > D'oh...  Consider requested "Al, you are a moron" duly delivered...
> > > I plead having been on way too low caffeine and too little sleep ;-/
> > >
> > > Looking at the original report, it looks like the scenario there is
> > > meant to be the following:
> > >
> > > CPU1: read_lock(&tasklist_lock)
> > >         tasklist_lock grabbed
> > >
> > > CPU2: gets an sg write(2) feeding a request to libata; host->lock is
> > > taken, the request is immediately completed and scsi_done() is about
> > > to be called.
> > >         host->lock grabbed
> > >
> > > CPU3: write_lock_irq(&tasklist_lock)
> > >         spins on tasklist_lock until CPU1 gets through.
> > >
> > > CPU2: gets around to kill_fasync() called by sg_rq_end_io() and to
> > > grabbing tasklist_lock inside send_sigio()
> > >         spins, since it's not in an interrupt and there's a pending
> > >         writer; host->lock is held, so CPU2 spins until CPU3 gets
> > >         through.
> >
> > Right, for a reader not in_interrupt(), it may be blocked by a random
> > waiting writer because of the fairness, even if the lock is currently
> > held by a reader:
> >
> >     CPU 1                         CPU 2                           CPU 3
> >     read_lock(&tasklist_lock);
> >     // get the lock
> >
> >                                   write_lock_irq(&tasklist_lock);
> >                                   // wait for the lock
> >
> >                                                                   read_lock(&tasklist_lock);
> >                                                                   // cannot get the lock
> >                                                                   // because of the fairness
>
> IOW, any caller of scsi_done() from non-interrupt context while
> holding a spinlock that is also taken in an interrupt...
>
> And we have drivers/scsi/scsi_error.c:scsi_send_eh_cmnd(), which calls
> ->queuecommand() under a mutex, with
>
> #define DEF_SCSI_QCMD(func_name) \
> int func_name(struct Scsi_Host *shost, struct scsi_cmnd *cmd) \
> { \
>         unsigned long irq_flags; \
>         int rc; \
>         spin_lock_irqsave(shost->host_lock, irq_flags); \
>         rc = func_name##_lck(cmd); \
>         spin_unlock_irqrestore(shost->host_lock, irq_flags); \
>         return rc; \
> }
>
> being commonly used for ->queuecommand() instances.  So any scsi_done()
> in foo_lck() (quite a few of those) + use of ->host_lock in interrupt
> for the same driver (also common)...
>
> I wonder why that hadn't triggered the same warning a long time ago -
> these warnings had been around for at least two years.  Am I missing
> something here?
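To make that pattern concrete, here is a minimal sketch of a driver
following the DEF_SCSI_QCMD shape above and completing the command
synchronously (the "foo" names and the trivial body are hypothetical;
only the structure matters):

        /* roughly needs <scsi/scsi_cmnd.h> and <scsi/scsi_host.h> */
        static int foo_queuecommand_lck(struct scsi_cmnd *cmd)
        {
                /* shost->host_lock is held here, IRQs disabled (see the macro) */
                cmd->result = DID_OK << 16;

                /*
                 * Synchronous completion: for an sg request this can reach
                 * sg_rq_end_io() -> kill_fasync() -> send_sigio() ->
                 * read_lock(&tasklist_lock), all under host_lock and not
                 * in_interrupt(), i.e. on the fair reader path.
                 */
                scsi_done(cmd);
                return 0;
        }

        DEF_SCSI_QCMD(foo_queuecommand)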
FWIW, the complete dependency chain is:

        &host->lock --> &new->fa_lock --> &f->f_owner.lock --> tasklist_lock

For the "&f->f_owner.lock" part to get onto lockdep's radar, the
following call trace needs to appear once:

        kill_fasync():
          kill_fasync_rcu():
            send_sigio()

Not sure whether that's rare or not, though.  And ->fa_lock also had its
own issue:

        https://lore.kernel.org/lkml/20210702091831.615042-1-desmondcheongzx@xxxxxxxxx/

which may have covered &host->lock for a while ;-)

Regards,
Boqun
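P.S. For reference, the in_interrupt() special case Linus pointed at
lives in queued_read_lock_slowpath(); abridged from
kernel/locking/qrwlock.c (comments kept from the source, tracepoints
and annotations trimmed):

        void queued_read_lock_slowpath(struct qrwlock *lock)
        {
                /*
                 * Readers come here when they cannot get the lock
                 * without waiting
                 */
                if (unlikely(in_interrupt())) {
                        /*
                         * Readers in interrupt context will get the lock
                         * immediately if the writer is just waiting (not
                         * holding the lock yet), so spin with ACQUIRE
                         * semantics until the lock is available without
                         * waiting in the queue.
                         */
                        atomic_cond_read_acquire(&lock->cnts,
                                                 !(VAL & _QW_LOCKED));
                        return;
                }
                atomic_sub(_QR_BIAS, &lock->cnts);

                /* Put the reader into the wait queue */
                arch_spin_lock(&lock->wait_lock);
                atomic_add(_QR_BIAS, &lock->cnts);

                /* Wait until no writer *holds* the lock */
                atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));

                /* Signal the next one in queue to become queue head */
                arch_spin_unlock(&lock->wait_lock);
        }

i.e. a non-interrupt reader always queues behind a pending writer,
which is exactly the fairness that bites in the scenario above.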