Re: spin_lock_irqsave or spin_lock in work queue handlers?

Hello!


My first idea was to use spin_lock_irqsave in the work handler:
...
spin_lock_irqsave(&tx_d->lock, flags);
sock_sendmsg(...);
spin_unlock_irqrestore(&tx_d->lock, flags);
...
but if I do that, I get this message from the kernel:
BUG: warning at kernel/softirq.c:137/local_bh_enable()
 [<b0121808>] local_bh_enable+0x38/0x79
...

From the stack trace, the warning is generated by the
"WARN_ON_ONCE(irqs_disabled());" line below.  This is because you have
disabled IRQs with spin_lock_irqsave(); local_bh_enable() expects to be
called with interrupts enabled.

void local_bh_enable(void)
{
#ifdef CONFIG_TRACE_IRQFLAGS
        unsigned long flags;

        WARN_ON_ONCE(in_irq());
#endif
        WARN_ON_ONCE(irqs_disabled());

#ifdef CONFIG_TRACE_IRQFLAGS
        local_irq_save(flags);
#endif
        /*
         * Are softirqs going to be turned on now:
         */
        if (softirq_count() == SOFTIRQ_OFFSET)
                trace_softirqs_on((unsigned long)__builtin_return_address(0));
        /*
         * Keep preemption disabled until we are done with
         * softirq processing:
         */
        sub_preempt_count(SOFTIRQ_OFFSET - 1);

        if (unlikely(!in_interrupt() && local_softirq_pending()))
                do_softirq();

        dec_preempt_count();
#ifdef CONFIG_TRACE_IRQFLAGS
        local_irq_restore(flags);
#endif
        preempt_check_resched();
}


So if I defined 'CONFIG_TRACE_IRQFLAGS', would it then be O.K. to use spin_lock_irqsave? (Aside from the fact that spin locks burn 100% of a CPU while waiting...)


Spin locks are for very quick turnaround only.  If you take a spin
lock and then call an API that takes some time (sock_sendmsg() is,
IMHO, an example, as shown by the long chain of functions in the
stack trace above), another CPU may end up spinning on that lock,
driving it to 100% utilization.
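
For what it's worth, the usual pattern here is to hold the spin lock
only long enough to copy the data out of the shared structure, and to
call sock_sendmsg() after dropping the lock.  A rough sketch - the
tx_d fields, buf and TX_MAX are made-up names, not from the driver:

	unsigned long flags;
	char buf[TX_MAX];	/* local copy of the small payload */
	size_t len;

	spin_lock_irqsave(&tx_d->lock, flags);
	len = tx_d->len;
	memcpy(buf, tx_d->data, len);	/* only the quick copy under the lock */
	spin_unlock_irqrestore(&tx_d->lock, flags);

	/* sock_sendmsg() may sleep, so call it with no spin lock held */
	sock_sendmsg(...);

This keeps the critical section short and keeps the sleepable call
outside the lock entirely.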

I know that, but I thought:
The sock_sendmsg call is "always" _very_ fast because the data size is always small, and I wanted to get the _best_ possible PingPong times...
So I decided to use them as long as the driver is still under development.


Most important is this - never hold a spin lock while calling
something that can sleep.   I am not sure whether local_bh_enable()
can sleep, but most likely YES, because preempt_check_resched() is
meant for rescheduling and may trigger one.   Wrapping a sleepable
function inside a spin lock is the classic scenario for the "CPU at
100% usage" bug.

I swear, I won't do that anymore! :-)


I think a mutex_lock() is better here.   What do others think?

I use semaphores now (down() and up()), and the interface doesn't seem to be any slower. Is that also O.K.? I don't know mutex_lock(). What's the difference between mutexes and semaphores? Is it better to use a mutex here?
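
For reference: a mutex behaves like a semaphore initialized to 1, but
it has a single owner, is typically a bit faster, and gets extra
debugging checks (CONFIG_DEBUG_MUTEXES).  A rough sketch of the two
APIs side by side, assuming a kernel of that era - the variable names
are made up:

	/* semaphore used as a lock, initialized to 1 */
	struct semaphore sem;
	init_MUTEX(&sem);	/* i.e. sema_init(&sem, 1) */
	down(&sem);
	/* ... critical section; sleeping (e.g. sock_sendmsg) is fine ... */
	up(&sem);

	/* mutex: same sleeping behaviour, stricter semantics (one owner,
	 * must be released by the task that took it) */
	struct mutex lock;
	mutex_init(&lock);
	mutex_lock(&lock);
	/* ... critical section, may sleep ... */
	mutex_unlock(&lock);

If you only ever use the semaphore as a binary lock, a mutex expresses
the same thing more clearly and catches misuse in debug builds.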


Many Thanks for your hints!

Regards,
Lukas

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ

