Re: spin_lock_irqsave or spin_lock in work queue handlers?

On Wed, Mar 5, 2008 at 4:34 AM, Lukas Razik <linux@xxxxxxxxxx> wrote:
> Hello all!
>
>  In a work handler (of a work queue) I want to avoid the concurrently
>  usage of a socket (because there can be more worker-threads on a SMP
>  system)...
>
>  My first idea was to use spin_lock_irqsave in the work handler:
>  ...
>  spin_lock_irqsave(&tx_d->lock, flags);
>  sock_sendmsg(...);
>  spin_unlock_irqrestore(&tx_d->lock, flags);
>  ...
>  but if I do that, I get this message from the kernel:
>  BUG: warning at kernel/softirq.c:137/local_bh_enable()
>   [<b0121808>] local_bh_enable+0x38/0x79
>   [<b021d445>] lock_sock+0x89/0x91
>   [<b016ebb0>] mntput_no_expire+0x11/0x6a
>   [<b025cc6a>] inet_autobind+0x8/0x52
>   [<b025d5de>] inet_sendmsg+0x1c/0x3f
>   [<b021b1d7>] sock_sendmsg+0xce/0xe8
>   [<b012d65d>] autoremove_wake_function+0x0/0x2d
>   [<b01165c6>] __wake_up_sync+0x31/0x4b
>   [<f0b27c60>] ethos_tx_action+0x1f0/0x280 [ethos]
>   [<b01176ab>] try_to_wake_up+0x355/0x35f
>   [<b012a960>] run_workqueue+0x72/0xaf
>   [<f0b27a70>] ethos_tx_action+0x0/0x280 [ethos]
>   [<b012b220>] worker_thread+0xd9/0x10d
>   [<b01176b5>] default_wake_function+0x0/0xc
>   [<b012b147>] worker_thread+0x0/0x10d
>   [<b012d591>] kthread+0xc2/0xed
>   [<b012d4cf>] kthread+0x0/0xed
>   [<b0101005>] kernel_thread_helper+0x5/0xb
>

From the stack trace, the warning is generated by the
"WARN_ON_ONCE(irqs_disabled());" line below.  This is because you have
disabled local IRQs with spin_lock_irqsave(), while local_bh_enable()
expects to run with IRQs enabled:

void local_bh_enable(void)
{
#ifdef CONFIG_TRACE_IRQFLAGS
        unsigned long flags;

        WARN_ON_ONCE(in_irq());
#endif
        WARN_ON_ONCE(irqs_disabled());

#ifdef CONFIG_TRACE_IRQFLAGS
        local_irq_save(flags);
#endif
        /*
         * Are softirqs going to be turned on now:
         */
        if (softirq_count() == SOFTIRQ_OFFSET)
                trace_softirqs_on((unsigned long)__builtin_return_address(0));
        /*
         * Keep preemption disabled until we are done with
         * softirq processing:
         */
        sub_preempt_count(SOFTIRQ_OFFSET - 1);

        if (unlikely(!in_interrupt() && local_softirq_pending()))
                do_softirq();

        dec_preempt_count();
#ifdef CONFIG_TRACE_IRQFLAGS
        local_irq_restore(flags);
#endif
        preempt_check_resched();
}
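
To make the chain in the trace concrete, here is a sketch of the failing
path, using the variable names from your snippet and annotated with the
calls shown in the stack trace above:

spin_lock_irqsave(&tx_d->lock, flags);  /* local IRQs now disabled */
sock_sendmsg(...);                      /* -> inet_sendmsg()
                                         *    -> inet_autobind()
                                         *       -> lock_sock()
                                         *          -> local_bh_enable()
                                         *             WARN_ON_ONCE(irqs_disabled())
                                         *             fires here */
spin_unlock_irqrestore(&tx_d->lock, flags);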


>
>  Then I tried:
>  ...
>  spin_lock(&tx_d->lock);
>  sock_sendmsg(...);
>  spin_unlock(&tx_d->lock);
>  ...
>  and I get no error messages anymore but now a nice person asked me if
>  that's safe and I think it isn't because interrupts are on in work
>  handlers or not?
>

Spin locks are meant for very short critical sections.  If you take a
spin lock and then call an API that can take some time (sock_sendmsg()
is, IMHO, an example, judging by the long chain of functions in the
stack trace above), another CPU may end up busy-waiting on the lock,
driving that CPU to 100% utilization.

Most important of all: never hold a spin lock while calling something
that can sleep.  I am not sure whether local_bh_enable() can sleep, but
most likely yes, because preempt_check_resched() is meant to trigger
rescheduling, which can schedule the caller away.  Wrapping a sleepable
function in a spin lock is the classic "100% CPU usage" bug scenario.

I think a mutex_lock() is better here.  What do others think?
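
As a minimal sketch (the struct layout and names below are my guesses,
not your actual code): work handlers run in process context, so
mutex_lock() is allowed to sleep, and there is no need to disable IRQs
as long as the lock is only ever taken from process context:

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <net/sock.h>

struct tx_desc {                        /* hypothetical: mirrors tx_d   */
        struct mutex lock;              /* replaces the spinlock; set up
                                         * once with mutex_init()       */
        struct socket *sock;
        struct work_struct work;
};

static void ethos_tx_action(struct work_struct *work)
{
        struct tx_desc *tx_d = container_of(work, struct tx_desc, work);
        struct msghdr msg = { };        /* build the message as before  */

        mutex_lock(&tx_d->lock);        /* may sleep: fine here         */
        sock_sendmsg(tx_d->sock, &msg, 0);
        mutex_unlock(&tx_d->lock);
}

This serializes all worker threads on the socket without ever spinning
or disabling interrupts.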

>  So why do I get the error message?
>  What can I use instead of spin_lock_irqsave?
>
>  Many Thanks for your answers!
>
>  Best regards
>  Lukas
>

-- 
Regards,
Peter Teoh


