Re: Spinlocks and interrupts

On 11/10/2011 11:58 PM, Rajat Sharma wrote:
> For most of the block drivers bio_endio runs in a context of its
> tasklet, so it is indeed atomic context.
>
> -Rajat
>
> On Fri, Nov 11, 2011 at 4:50 AM, Kai Meyer<kai@xxxxxxxxxx>  wrote:
>>
>> On 11/10/2011 04:00 PM, Jeff Haran wrote:
>>>> -----Original Message-----
>>>> From: kernelnewbies-bounces@xxxxxxxxxxxxxxxxx [mailto:kernelnewbies-
>>>> bounces@xxxxxxxxxxxxxxxxx] On Behalf Of Kai Meyer
>>>> Sent: Thursday, November 10, 2011 1:55 PM
>>>> To: kernelnewbies@xxxxxxxxxxxxxxxxx
>>>> Subject: Re: Spinlocks and interrupts
>>>>
>>>> Alright, to summarize, for my benefit mostly,
>>>>
>>>> I'm writing a block device driver, which has 2 entry points into my
>>>> code that will reach this critical section. It's either the make
>>>> request function for the block device, or the resulting
>>>> bio->bi_end_io function.
>>>> I do some waiting with msleep() (for now) from the make request
>>>> function entry point, so I'm confident that entry point is not in an
>>>> atomic context. I also only end up requesting the critical section to
>>>> call kmalloc from this context, which is why I never ran into the
>>>> scheduling while atomic issue before.
>>>>
>>>> I'm fairly certain the critical section executes in thread context not
>>>> interrupt context from either entry point.
>>>>
>>>> I'm certain that the spinlock_t is only ever used in one function (I
>>>> posted a simplified version of the critical section earlier).
>>>>
>>>> It seems that the critical section is often called in an atomic
>>>> context.
>>>> The spin_lock function sounds like it will only cause a second call to
>>>> spin_lock to spin if it is called on a separate core.
>>>>
>>>> But, since I'm certain the critical section is never called from
>>>> interrupt context, only thread context, the fact that pre-emption is
>>>> disabled on the core should provide the protection I need without
>>>> having to disable IRQs. Disabling IRQs would prevent an interrupt from
>>>> occurring while the lock is acquired. I would like to avoid disabling
>>>> interrupts if I don't need to.
>>>>
>>>> So it sounds like spin_lock/spin_unlock is the correct choice?
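One wrinkle with plain spin_lock()/spin_unlock() here: as Rajat notes, bio_endio often runs in tasklet (softirq) context, and a softirq can interrupt process-context code on the same CPU even with preemption disabled. If the same lock is taken from both paths, the process-context side needs the _bh variant. A sketch under that assumption (the lock and function names are illustrative only):

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);	/* hypothetical lock name */

/* Process context (make_request path): use the _bh variant so a
 * softirq cannot fire on this CPU and try to re-take the held lock. */
static void from_make_request(void)
{
	spin_lock_bh(&my_lock);
	/* ... critical section ... */
	spin_unlock_bh(&my_lock);
}

/* Softirq/tasklet context (bi_end_io path): plain spin_lock() is
 * enough, since softirqs do not nest on one CPU. */
static void from_bi_end_io(void)
{
	spin_lock(&my_lock);
	/* ... critical section ... */
	spin_unlock(&my_lock);
}
```

If hardware interrupt handlers never touch the lock, this avoids the cost of disabling IRQs, which matches the goal stated above.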
>>>>
>>>> In addition, I'd like to be more confident in my assumptions above.
>>>> Can I test for atomic context? For instance, I know that you can call
>>>> irqs_disabled(), is there a similar is_atomic() function I can call?
>>>> I would like to put a few calls in different places to learn what
>>>> sort of context I'm in.
>>>>
>>>> -Kai Meyer
>>>>
>>>> On 11/10/2011 12:19 PM, Jeff Haran wrote:
>>>>>> -----Original Message-----
>>>>>> From: kernelnewbies-
>>>> bounces+jharan=bytemobile.com@xxxxxxxxxxxxxxxxx
>>>>>> [mailto:kernelnewbies-
>>>>>> bounces+jharan=bytemobile.com@xxxxxxxxxxxxxxxxx] On Behalf Of
>>>> Dave
>>>>>> Hylands
>>>>>> Sent: Thursday, November 10, 2011 11:07 AM
>>>>>> To: Kai Meyer
>>>>>> Cc: kernelnewbies@xxxxxxxxxxxxxxxxx
>>>>>> Subject: Re: Spinlocks and interrupts
>>>>>>
>>>>>> Hi Kai,
>>>>>>
>>>>>> On Thu, Nov 10, 2011 at 10:14 AM, Kai Meyer<kai@xxxxxxxxxx>     wrote:
>>>>>>> I think I get it. I'm hitting the scheduling while atomic because
>>>>>>> I'm calling my function from a struct bio's endio function, which
>>>>>>> is probably running with a lock held somewhere else, and then my
>>>>>>> mutex sleeps, while the spin_lock functions do not sleep.
>>>>>> Actually, just holding a lock doesn't create an atomic context.
>>>>> I believe on kernels with kernel pre-emption enabled the act of
>>>>> taking the lock disables pre-emption. If it didn't work this way you
>>>>> could end up taking the lock in one process context and, while the
>>>>> lock was held, get pre-empted. Then another process tries to take
>>>>> the lock and you deadlock.
>>>>>
>>>>> Jeff Haran
>>>>>
>>> Kai, you might want to try bottom posting. It is the standard on these
>>> lists. It makes it easier for others to follow the thread.
>>>
>>> I know of no kernel call that you can make to test for current execution
>>> context. There are the in_irq(), in_interrupt() and in_softirq() macros
>>> in hardirq.h, but when I've looked at the code that implements them I've
>>> come to the conclusion that they sometimes will lie. in_softirq()
>>> returns non-zero if you are in a software IRQ. Fair enough. But based on
>>> my reading in the past it's looked to me like it will also return
>>> non-zero if you've disabled bottom halves from process context with say
>>> a call to spin_lock_bh().
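For debugging, those macros can still be dumped at interesting points, as long as their output is read with Jeff's caveat in mind. A sketch of such a helper (the function name is made up; the macros themselves are real, from hardirq.h):

```c
#include <linux/hardirq.h>
#include <linux/printk.h>

/* Hypothetical debug helper: report what the context macros say here.
 * Caveat from the discussion above: in_softirq() is also non-zero when
 * bottom halves are merely disabled (e.g. via spin_lock_bh()) from
 * process context, so it really means "BH context or BH disabled". */
static void dump_context(const char *where)
{
	pr_info("%s: in_irq=%d in_softirq=%d in_interrupt=%d preempt_count=%d\n",
		where, !!in_irq(), !!in_softirq(), !!in_interrupt(),
		preempt_count());
}
```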
>>>
>>> It would be nice if there were some way of asking the kernel what
>>> context you are in, for debugging if for no other reason, but if it's
>>> there I haven't found it.
>>>
>>> I'd love to be proven wrong here, BTW. If others know better, please
>>> enlighten me.
>>>
>>> Jeff Haran
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Kernelnewbies mailing list
>>> Kernelnewbies@xxxxxxxxxxxxxxxxx
>>> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>> I try to remember to bottom post on message lists, but obviously I've
>> been negligent :)
>>
>> Perhaps I'll just add some calls to msleep() at various places to help
>> me identify when portions of my code are in an atomic context, just to
>> help me learn what's going on.
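A lighter-weight alternative to sprinkling msleep() calls: the kernel already provides might_sleep() as an annotation for exactly this. With CONFIG_DEBUG_ATOMIC_SLEEP enabled it prints a warning, without actually sleeping, whenever it is reached in atomic context. A sketch of dropping it into a suspect path (the surrounding function is hypothetical):

```c
#include <linux/kernel.h>

/* Hypothetical placement: at the top of the critical-section entry. */
static void my_critical_section_entry(void)
{
	might_sleep();	/* warns (no-op otherwise) if we are atomic here */
	/* ... rest of the function ... */
}
```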
>>
>> -Kai Meyer
>>
>>

Thanks so much for this, Rajat. It helps a lot.


