Re: spin_lock_irqsave or spin_lock in work queue handlers?

Sorry, I missed the question on CONFIG_TRACE_IRQFLAGS - I think that config option is meant for tracing IRQ state transitions when debugging.  Correct?

Lukas Razik wrote:
Hello!


My first idea was to use spin_lock_irqsave in the work handler:
...
spin_lock_irqsave(&tx_d->lock, flags);
sock_sendmsg(...);
spin_unlock_irqrestore(&tx_d->lock, flags);
...
but if I do that, I get this message from the kernel:
BUG: warning at kernel/softirq.c:137/local_bh_enable()
 [<b0121808>] local_bh_enable+0x38/0x79
...

From the stack trace, the warning is generated by the
"WARN_ON_ONCE(irqs_disabled());" line below.   This is because you have
disabled IRQs.  The function can be called with IRQs either enabled or
disabled, but it warns if they are disabled:

void local_bh_enable(void)
{
#ifdef CONFIG_TRACE_IRQFLAGS
        unsigned long flags;

        WARN_ON_ONCE(in_irq());
#endif
        WARN_ON_ONCE(irqs_disabled());

#ifdef CONFIG_TRACE_IRQFLAGS
        local_irq_save(flags);
#endif
        /*
         * Are softirqs going to be turned on now:
         */
        if (softirq_count() == SOFTIRQ_OFFSET)
                trace_softirqs_on((unsigned long)__builtin_return_address(0));
        /*
         * Keep preemption disabled until we are done with
         * softirq processing:
         */
        sub_preempt_count(SOFTIRQ_OFFSET - 1);

        if (unlikely(!in_interrupt() && local_softirq_pending()))
                do_softirq();

        dec_preempt_count();
#ifdef CONFIG_TRACE_IRQFLAGS
        local_irq_restore(flags);
#endif
        preempt_check_resched();
}


So if I defined 'CONFIG_TRACE_IRQFLAGS', it would be O.K. to use spin_lock_irqsave? (Aside from the fact that spin locks use 100% of the CPU...)


Spin locks are for very quick turnaround usage.   Meaning that if you
take a spin lock and then call some API that takes some time (like
sock_sendmsg(), IMHO, as shown by the long chain of functions in the
stack trace above), another CPU may end up spinning on the lock -
driving that CPU to 100% utilization.

I know this fact, but I thought:
the sock_sendmsg call is "always" _very_ fast because the data size is always small, and I wanted to get the best _possible_ PingPong times...
So I decided to use them as long as the driver is in development.


Most important is this - never hold a spin lock while calling
something that can sleep.   Not sure if local_bh_enable() can sleep
or not - most likely YES, because preempt_check_resched() is meant for
rescheduling and so is likely to allow sleeping.   Wrapping a sleepable
function inside a spin lock is the scenario of the classic "CPU
100% usage" bug.

I swear, I won't do that anymore! :-)
Thank you :-) .
I think a mutex_lock() is better.   What do others think?

I use semaphores now (down() and up()), and the interface doesn't seem to be slower. Is that also O.K.?
I don't know mutex_lock(). What's the difference between a mutex_lock and a semaphore? Is it better to use a mutex_lock here?


They are two different things.   Semaphores are a synchronization mechanism used to synchronize among CPUs/processes/tasks etc. - just a counting mechanism.   A semaphore DOES NOT lock out the other CPU while one CPU is using the same resources; it only maintains the count.   The counting itself must be atomic, and different architectures have different instruction sets for that - sometimes the kernel guarantees the atomicity by putting a spin lock around it.   Take a look at the fs/super.c code - spin locks and mutexes are often used TOGETHER with semaphores:

static int grab_super(struct super_block *s) __releases(sb_lock)
{
       s->s_count++;
       spin_unlock(&sb_lock);
       down_write(&s->s_umount);
       if (s->s_root) {

Or this:

void unlock_super(struct super_block * sb)
{
       put_fs_excl();
       mutex_unlock(&sb->s_lock);
}

(In general, all the get_xxx and put_xxx functions you see in the source code are semaphore-like - sorry if this generalization is wrong :-)).   They go in pairs to keep the counting balanced.   But you use a semaphore ONLY WHEN you want multiple users sharing the same resource - e.g. one piece of data, many readers.   Of writers you can only have one, so a plain counting semaphore cannot be used there - use either a spin lock or a mutex alone.

Have you heard of lock-free/wait-free algorithms (check the wiki)?   Or non-blocking synchronization?   (Linux calls its variant RCU - check the wiki, and the Documentation/RCU directory in the source.)   It is a large topic in itself... I cannot do it justice here... type "lockfree" into Google Video and you can listen to a seminar on these algorithms... wow, Google Video has lots of open source seminars, btw.

Many Thanks for your hints!

Regards,
Lukas





-- To unsubscribe from this list: send an email with "unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx Please read the FAQ at http://kernelnewbies.org/FAQ
