Re: [PATCH] mm/page_alloc: Use write_seqlock_irqsave() instead of write_seqlock() + local_irq_save().

On 2023/06/23 19:53, Petr Mladek wrote:
> BTW: I see a possible race in the below code.
> 
>> | unsigned __seqprop_spinlock_sequence(const seqcount_spinlock_t *s)
>> | { 
>> | 	unsigned seq = READ_ONCE(s->seqcount.sequence);
>> | 
>> | 	if (unlikely(seq & 1)) {
>> | 		spin_lock(s->lock); 
>> | 		spin_unlock(s->lock);
> 
> This guarantees the reader waits for the writer. But does this
> guarantee that another writer could not bump the sequence
> before we read it below?
> 
>> | 		seq = READ_ONCE(s->seqcount.sequence);
> 
> IMHO, it should be:
> 
> 	if (unlikely(seq & 1)) {
> 		spin_lock(s->lock); 
> 		seq = READ_ONCE(s->seqcount.sequence);
> 		spin_unlock(s->lock);
> 
> IMHO, only this would guarantee that returned seq couldn't be odd.

What is wrong with the sequence being bumped again?
If the returned seq is odd, the caller simply retries as needed.
For example, __read_seqcount_begin() will retry in

	while ((__seq = seqprop_sequence(s)) & 1)
		cpu_relax();

until an even sequence is observed.

A missing data_race() annotation that results in KCSAN messages might be
an unexpected source of printk(). :-)




