On Fri, 5 Aug 2022, Matthew Wilcox wrote:

> On Mon, Aug 01, 2022 at 11:01:40AM -0400, Mikulas Patocka wrote:
> > In most cases, the buffer is set uptodate while it is locked, so that
> > there is no race on the uptodate flag (the race exists on the locked
> > flag). Are there any cases where the uptodate flag is modified on an
> > unlocked buffer, so that it needs special treatment too?
>
> I think you misunderstand the purpose of locked/uptodate. At least
> for pages, the lock flag does not order access to the data in the page.
> Indeed, the contents of the page can be changed while you hold the lock.
> But the uptodate flag does order access to the data. At the point where
> you can observe the uptodate flag set, you know the contents of the page
> have been completely read from storage. And you don't need to hold the
> lock to check the uptodate flag. So this is wrong:
>
> 	buffer_lock()
> 	*data = 0x12345678;
> 	buffer_set_uptodate_not_ordered()
> 	buffer_unlock_ordered()
>
> because a reader can do:
>
> 	while (!buffer_test_uptodate()) {
> 		buffer_lock();
> 		buffer_unlock();
> 	}
> 	x = *data;
>
> and get x != 0x12345678 because the compiler can move the
> buffer_set_uptodate_not_ordered() before the store to *data.

Thanks for the explanation. Would you like this patch?

From: Mikulas Patocka <mpatocka@xxxxxxxxxx>

Let's have a look at this piece of code in __bread_slow:

	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync;
	submit_bh(REQ_OP_READ, 0, bh);
	wait_on_buffer(bh);
	if (buffer_uptodate(bh))
		return bh;

Neither wait_on_buffer nor buffer_uptodate contain any memory barrier.
Consequently, if someone calls sb_bread and then reads the buffer data,
the read of buffer data may be executed before wait_on_buffer(bh) on
architectures with weak memory ordering and it may return invalid data.

Fix this bug by adding a write memory barrier to set_buffer_uptodate
and a read memory barrier to buffer_uptodate (in the same way as
folio_mark_uptodate and folio_test_uptodate do it).
We also add a barrier to buffer_locked - it pairs with a barrier in
unlock_buffer.

Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx

Index: linux-2.6/include/linux/buffer_head.h
===================================================================
--- linux-2.6.orig/include/linux/buffer_head.h
+++ linux-2.6/include/linux/buffer_head.h
@@ -117,10 +117,8 @@ static __always_inline int test_clear_bu
  * of the form "mark_buffer_foo()".  These are higher-level functions which
  * do something in addition to setting a b_state bit.
  */
-BUFFER_FNS(Uptodate, uptodate)
 BUFFER_FNS(Dirty, dirty)
 TAS_BUFFER_FNS(Dirty, dirty)
-BUFFER_FNS(Lock, locked)
 BUFFER_FNS(Req, req)
 TAS_BUFFER_FNS(Req, req)
 BUFFER_FNS(Mapped, mapped)
@@ -135,6 +133,49 @@ BUFFER_FNS(Meta, meta)
 BUFFER_FNS(Prio, prio)
 BUFFER_FNS(Defer_Completion, defer_completion)
 
+static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
+{
+	/*
+	 * make it consistent with folio_mark_uptodate
+	 * pairs with smp_acquire__after_ctrl_dep in buffer_uptodate
+	 */
+	smp_wmb();
+	set_bit(BH_Uptodate, &bh->b_state);
+}
+
+static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
+{
+	clear_bit(BH_Uptodate, &bh->b_state);
+}
+
+static __always_inline int buffer_uptodate(const struct buffer_head *bh)
+{
+	bool ret = test_bit(BH_Uptodate, &bh->b_state);
+	/*
+	 * make it consistent with folio_test_uptodate
+	 * pairs with smp_wmb in set_buffer_uptodate
+	 */
+	if (ret)
+		smp_acquire__after_ctrl_dep();
+	return ret;
+}
+
+static __always_inline void set_buffer_locked(struct buffer_head *bh)
+{
+	set_bit(BH_Lock, &bh->b_state);
+}
+
+static __always_inline int buffer_locked(const struct buffer_head *bh)
+{
+	bool ret = test_bit(BH_Lock, &bh->b_state);
+	/*
+	 * pairs with smp_mb__after_atomic in unlock_buffer
+	 */
+	if (!ret)
+		smp_acquire__after_ctrl_dep();
+	return ret;
+}
+
 #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
 
 /* If we *know* page->private refers to buffer_heads */