might_sleep oops in irq_set_affinity_notifier users

Hi,
The sfc, mellanox, and infiniband drivers use the
irq_set_affinity_notifier service.  Under the rt kernel this
causes a might_sleep() oops at driver load, because
schedule_work(), which takes sleeping locks on rt, is called
with the raw desc->lock spinlock held:

int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask)
{
	...
	if (desc->affinity_notify) {
		kref_get(&desc->affinity_notify->kref);
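		/* queuing work here, with desc->lock (raw) held */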
		schedule_work(&desc->affinity_notify->work);
	}
	...
}

int irq_set_affinity(unsigned int irq, const struct cpumask *mask)
{
	...
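	/* desc->lock is a raw spinlock, even on -rt */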
	raw_spin_lock_irqsave(&desc->lock, flags);
	ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
	raw_spin_unlock_irqrestore(&desc->lock, flags);
	...
}
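
To spell out the chain that trips might_sleep() (the workqueue
internals named below are an assumption based on mainline
sources and may differ between versions):

	/*
	 * irq_set_affinity()
	 *   raw_spin_lock_irqsave(&desc->lock, ...)  <- raw lock held
	 *   __irq_set_affinity_locked()
	 *     schedule_work()
	 *       queue_work() -> __queue_work()
	 *         spin_lock(...)  <- spinlock_t, a sleeping lock on -rt
	 *           might_sleep() fires
	 */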

I suppose this could be fixed by using tasklets instead of
schedule_work, since a tasklet can be scheduled from atomic
context (see the sketch below).  But perhaps it would be
better to modify workqueues to use raw locks for the queuing
and dequeuing of work rather than sleeping locks; that way,
work could be queued from both sleepable and atomic contexts
under the rt kernel, just as it can be today under the non-rt
kernel without any problems.
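
To make the tasklet idea concrete, here is a minimal sketch,
assuming struct irq_affinity_notify grew a hypothetical
struct tasklet_struct member (call it "tasklet") that runs the
same notification logic the work item runs today; nothing like
this exists in the tree:

	if (desc->affinity_notify) {
		kref_get(&desc->affinity_notify->kref);
		/* tasklet_schedule() may be called from atomic
		 * context, unlike schedule_work() on -rt */
		tasklet_schedule(&desc->affinity_notify->tasklet);
	}

One wrinkle: the affinity notify() callbacks run in process
context today and may sleep, so a tasklet could only do the
queuing and would still have to hand off to a sleepable
context.  The workqueue alternative would roughly amount to
switching the queuing lock in kernel/workqueue.c from
spinlock_t to raw_spinlock_t, so that queue_work() stays
callable from atomic context on -rt.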

Joe

PS: failure paths: both sfc and mellanox invoke
irq_cpu_rmap_add(), which in turn reaches the
irq_set_affinity() path above and oopses.  The infiniband
driver invokes irq_set_affinity() directly.

PPS: the oops was observed under 3.6.11.6-rt38, but a perusal
of the 3.10.6-rt3 sources shows the same problem there.