Hi Andrea,

On 2019-12-21, Andrea Parri <parri.andrea@xxxxxxxxx> wrote:
>> +	*desc_out = READ_ONCE(*desc);
>> +
>> +	/* Load data before re-checking state. */
>> +	smp_rmb(); /* matches LMM_REF(desc_reserve:A) */
>
> I looked for a matching WRITE_ONCE() or some other type of marked write,
> but I could not find it. What is the rationale? Or what did I miss?

This smp_rmb() matches the write tagged LMM_TAG(desc_reserve:A) in
desc_reserve().

>> +	do {
>> +		next_lpos = get_next_lpos(data_ring, begin_lpos, size);
>> +
>> +		if (!data_push_tail(rb, data_ring,
>> +				    next_lpos - DATA_SIZE(data_ring))) {
>> +			/* Failed to allocate, specify a data-less block. */
>> +			blk_lpos->begin = INVALID_LPOS;
>> +			blk_lpos->next = INVALID_LPOS;
>> +			return NULL;
>> +		}
>> +	} while (!atomic_long_try_cmpxchg(&data_ring->head_lpos, &begin_lpos,
>> +					  next_lpos));
>> +
>> +	/*
>> +	 * No barrier is needed here. The data validity is defined by
>> +	 * the state of the associated descriptor. They are marked as
>> +	 * invalid at the moment. And only the winner of the above
>> +	 * cmpxchg() could write here.
>> +	 */
>
> The (successful) CMPXCHG provides a full barrier. This comment suggests
> that that could be somehow relaxed? Or the comment could be improved?

You are correct. There is no need for the full barrier here. This code
is based on Petr's POC. I focused on making sure the needed barriers are
in place, but did not try to eliminate excessive barriers.

> (The patch introduces a number of CMPXCHG: similar questions would
> apply to those other instances...)

LMM_TAG(data_push_tail:A) is the only CMPXCHG that requires its full
barrier. All others can be relaxed. I will make this change for the next
series (a couple of rough sketches are below).

Thanks for taking the time for this.

John Ogness
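For the first question, a generic sketch of the kind of pairing involved
(illustration only: this is the standard message-passing pattern from
memory-barriers.txt, with made-up names; in the ringbuffer the role of
the flag store is played by the operation tagged LMM_TAG(desc_reserve:A),
not a plain WRITE_ONCE()):

    static int mp_data;
    static int mp_flag;

    /* writer side */
    static void mp_publish(void)
    {
    	WRITE_ONCE(mp_data, 42);	/* store the payload */
    	smp_wmb();			/* order payload store before flag store */
    	WRITE_ONCE(mp_flag, 1);		/* marked write the reader pairs with */
    }

    /* reader side */
    static int mp_consume(void)
    {
    	if (!READ_ONCE(mp_flag))	/* marked read of the flag */
    		return -1;
    	smp_rmb();			/* matches the writer's smp_wmb() */
    	return READ_ONCE(mp_data);	/* guaranteed to observe 42 */
    }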
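And a rough sketch of what relaxing the cmpxchg in the data_alloc() hunk
quoted above could look like (illustration only, not the posted code;
atomic_long_try_cmpxchg_relaxed() drops the full ordering that, as noted,
is not needed to advance head_lpos):

    	do {
    		next_lpos = get_next_lpos(data_ring, begin_lpos, size);

    		if (!data_push_tail(rb, data_ring,
    				    next_lpos - DATA_SIZE(data_ring))) {
    			/* Failed to allocate, specify a data-less block. */
    			blk_lpos->begin = INVALID_LPOS;
    			blk_lpos->next = INVALID_LPOS;
    			return NULL;
    		}

    		/*
    		 * Relaxed is sufficient: data validity is defined by the
    		 * descriptor state, not by ordering against this store.
    		 */
    	} while (!atomic_long_try_cmpxchg_relaxed(&data_ring->head_lpos,
    						  &begin_lpos, next_lpos));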