On 2023/06/23 19:11, Sebastian Andrzej Siewior wrote:
> | unsigned __seqprop_spinlock_sequence(const seqcount_spinlock_t *s)
> | {
> | 	unsigned seq = READ_ONCE(s->seqcount.sequence);
> |
> | 	if (unlikely(seq & 1)) {
> | 		spin_lock(s->lock);
> | 		spin_unlock(s->lock);
> | 		seq = READ_ONCE(s->seqcount.sequence);
> | 	}
> | 	return seq;
> | }

OK. I understood that read_seqbegin() implies spin_lock()/spin_unlock() if RT.
What a deep macro. Thank you for the explanation. (A toy userspace model of
this reader-side behavior is sketched at the end of this mail.)

Well,

/*
 * Zonelists may change due to hotplug during allocation. Detect when zonelists
 * have been rebuilt so allocation retries. Reader side does not lock and
 * retries the allocation if zonelist changes. Writer side is protected by the
 * embedded spin_lock.
 */

is not accurate. How about something like below?

  If !RT, reader side does not lock and retries the allocation if zonelist
  changes. If RT, reader side grabs and releases the embedded spin_lock in
  order to wait for zonelist change operations to complete.

Hmm, I feel worried that kmalloc(GFP_ATOMIC) from hard IRQ context might sleep
if RT...
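
For reference, here is a minimal userspace model of the two reader behaviors,
just to make sure I understand the semantics. All names (model_seqlock,
model_read_begin(), ...) are made up for illustration; this is not the kernel
implementation. The RT-style reader mimics __seqprop_spinlock_sequence() quoted
above: on an odd count it acquires and releases the writer's lock so it blocks
until the writer finishes, whereas a !RT reader would simply keep retrying.
Builds with "cc -pthread model_seqlock.c".

/* model_seqlock.c: toy model of seqcount readers, not kernel code. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct model_seqlock {
	atomic_uint sequence;		/* odd while a writer is active */
	pthread_mutex_t lock;		/* stands in for the embedded spinlock */
};

/* Writer side: make the count odd, update data, make it even again. */
static void model_write_begin(struct model_seqlock *s)
{
	pthread_mutex_lock(&s->lock);
	atomic_fetch_add_explicit(&s->sequence, 1, memory_order_release);
}

static void model_write_end(struct model_seqlock *s)
{
	atomic_fetch_add_explicit(&s->sequence, 1, memory_order_release);
	pthread_mutex_unlock(&s->lock);
}

/* RT-style reader: on an odd count, block on the writer's lock instead of
 * spinning, then reload the count. A !RT reader would skip the lock/unlock
 * and just keep retrying in the caller's loop. */
static unsigned int model_read_begin(struct model_seqlock *s)
{
	unsigned int seq = atomic_load_explicit(&s->sequence, memory_order_acquire);

	if (seq & 1) {
		pthread_mutex_lock(&s->lock);	/* may sleep here, like RT */
		pthread_mutex_unlock(&s->lock);
		seq = atomic_load_explicit(&s->sequence, memory_order_acquire);
	}
	return seq;
}

/* Returns nonzero if a writer ran while the reader was reading the data. */
static int model_read_retry(struct model_seqlock *s, unsigned int seq)
{
	return atomic_load_explicit(&s->sequence, memory_order_acquire) != seq;
}

static struct model_seqlock s = { .lock = PTHREAD_MUTEX_INITIALIZER };
static int data;

int main(void)
{
	unsigned int seq;
	int snapshot;

	model_write_begin(&s);
	data = 42;
	model_write_end(&s);

	/* Reader: retry until an even, unchanged sequence brackets the read. */
	do {
		seq = model_read_begin(&s);
		snapshot = data;
	} while (model_read_retry(&s, seq));

	printf("read %d at sequence %u\n", snapshot, seq);
	return 0;
}

Note that in this model the reader can sleep inside model_read_begin(), which
is exactly the property that worries me for the GFP_ATOMIC-from-hard-IRQ case
above.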