On 2020-06-11, Petr Mladek <pmladek@xxxxxxxx> wrote:
> All this relies on the fact that the full barrier is called in
> data_push_tail() and data_push_tail() is called right above.
> But there are two situations where the barrier is not called.
> It is when:
>
> 1. desc.text_blk_lpos.next already is behind data_ring->tail_lpos.
>
>    This is safe.

Yes, and I have since expanded the comment above the data_push_tail()
while loop to mention that:

	/*
	 * Loop until the tail lpos is at or beyond @lpos. This condition
	 * may already be satisfied, resulting in no full memory barrier
	 * from data_push_tail:C being performed. However, since this CPU
	 * sees the new tail lpos, any descriptor states that transitioned
	 * to the reusable state must already be visible.
	 */

(A simplified skeleton of how this comment relates to the loop follows
at the end of this mail.)

> 2. desc.text_blk_lpos == INVALID_LPOS.
>
>    It seems that this is not synchronized and other CPUs might see
>    the old state.

Great catch! This could trigger the WARN_ON at desc_reserve:F and lead
to prb_reserve() unnecessarily failing.

> The question is what to do with the empty data case. I see three
> possibilities:
>
> 1. Ignore the case with empty block because it (probably) does not
>    cause real problems.

It could cause dropped messages.

> 2. Call the full barrier in data_push_tail() even when the data
>    block is empty.

This is the common case, since most records will not have dictionary
data.

> 3. Call the full barrier also in desc_push_tail() as I suggested.
>
> My opinion:
>
> I prefer 3rd solution.

Agreed. For my next version I am folding all smp_mb() and smp_wmb()
calls into their neighboring cmpxchg_relaxed(). This would apply here
as well, changing desc_push_tail:B to a full cmpxchg() (a before/after
sketch of that folding also follows below).

We still need the full memory barrier at data_push_tail:C. Readers rely
on it to be able to verify whether their copied data is still valid:

    CPU0 (writer0)        CPU1 (writer1)         CPU2 (reader)

                                                 desc_read->committed
    desc_make_reusable
    smp_mb
    data_push_tail
                                                 read new data tail
                          data_push_head
                          smp_mb
                          write new data
                                                 read garbage new data
                                                 desc_read->reusable

(A simplified sketch of this reader-side validation also follows
below.)

John Ogness
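
For context, here is a heavily simplified skeleton of how the expanded
comment relates to the loop and to data_push_tail:C. This is an
illustration only, not the function body from the series; the loop
condition, the data_make_reusable() step, and the error handling are
all abbreviated:

	/*
	 * Illustrative skeleton only. struct prb_data_ring and the
	 * data_push_tail:C label are from the series; everything else
	 * here is simplified.
	 */
	static bool data_push_tail(struct prb_data_ring *data_ring,
				   unsigned long lpos)
	{
		unsigned long tail_lpos;
		unsigned long next_lpos;

		tail_lpos = atomic_long_read(&data_ring->tail_lpos);

		/*
		 * Loop until the tail lpos is at or beyond @lpos. If that
		 * already holds, the loop body never runs and the full
		 * barrier from data_push_tail:C is never performed.
		 */
		while ((long)(lpos - tail_lpos) > 0) {
			/*
			 * Make the data blocks up to @lpos reusable and
			 * compute @next_lpos (data_make_reusable() in the
			 * series, elided here).
			 */
			next_lpos = lpos;

			/* data_push_tail:C: full barrier on success */
			if (atomic_long_try_cmpxchg(&data_ring->tail_lpos,
						    &tail_lpos, next_lpos))
				break;
			/* on failure, @tail_lpos was reloaded; retry */
		}

		return true;
	}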
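
Concretely, the folding mentioned above amounts to replacing an
explicit barrier next to a relaxed cmpxchg with a fully ordered
cmpxchg. A before/after sketch (illustrative only; desc_ring->tail_id,
DESC_ID() and the desc_push_tail:B label are from the series, the
surrounding lines are not the literal diff):

	unsigned long cmp;

	/* before: a neighboring smp_mb() orders the relaxed cmpxchg */
	smp_mb();
	cmp = atomic_long_cmpxchg_relaxed(&desc_ring->tail_id, tail_id,
					  DESC_ID(tail_id + 1)); /* desc_push_tail:B */

	/*
	 * after: on success, the full cmpxchg() orders like smp_mb()
	 * on both sides, so the explicit barrier can be dropped
	 */
	cmp = atomic_long_cmpxchg(&desc_ring->tail_id, tail_id,
				  DESC_ID(tail_id + 1)); /* desc_push_tail:B */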
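
And the reader-side dependence shown in the table above could be
sketched roughly like this (simplified; desc_read() and the
desc_committed state are from the series, while @desc_ring, @id, @buf,
@data_block and @len are assumed context):

	struct prb_desc desc;

	/* CPU2 (reader): check, copy, then re-validate the descriptor */
	if (desc_read(desc_ring, id, &desc) != desc_committed)
		return false;		/* record not (or no longer) valid */

	memcpy(buf, data_block, len);	/* may race with writer1 above */

	/*
	 * Re-check the state. The full barrier at data_push_tail:C
	 * guarantees that a reader that saw the new data tail (and thus
	 * possibly copied garbage) also sees the reusable state here and
	 * discards its copy.
	 */
	if (desc_read(desc_ring, id, &desc) != desc_committed)
		return false;		/* data was recycled while copying */

	return true;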