On Tue, Aug 09, 2022 at 02:32:13PM -0400, Mikulas Patocka wrote:
> From: Mikulas Patocka <mpatocka@xxxxxxxxxx>
> 
> Let's have a look at this piece of code in __bread_slow:
> 
> 	get_bh(bh);
> 	bh->b_end_io = end_buffer_read_sync;
> 	submit_bh(REQ_OP_READ, 0, bh);
> 	wait_on_buffer(bh);
> 	if (buffer_uptodate(bh))
> 		return bh;
> 
> Neither wait_on_buffer nor buffer_uptodate contain any memory barrier.
> Consequently, if someone calls sb_bread and then reads the buffer data,
> the read of buffer data may be executed before wait_on_buffer(bh) on
> architectures with weak memory ordering and it may return invalid data.
> 
> Fix this bug by adding a memory barrier to set_buffer_uptodate and an
> acquire barrier to buffer_uptodate (in a similar way as
> folio_test_uptodate and folio_mark_uptodate).
> 
> Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>

Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>

> Cc: stable@xxxxxxxxxxxxxxx
> 
> Index: linux-2.6/include/linux/buffer_head.h
> ===================================================================
> --- linux-2.6.orig/include/linux/buffer_head.h
> +++ linux-2.6/include/linux/buffer_head.h
> @@ -117,7 +117,6 @@ static __always_inline int test_clear_bu
>   * of the form "mark_buffer_foo()". These are higher-level functions which
>   * do something in addition to setting a b_state bit.
>   */
> -BUFFER_FNS(Uptodate, uptodate)
>  BUFFER_FNS(Dirty, dirty)
>  TAS_BUFFER_FNS(Dirty, dirty)
>  BUFFER_FNS(Lock, locked)
> @@ -135,6 +134,30 @@ BUFFER_FNS(Meta, meta)
>  BUFFER_FNS(Prio, prio)
>  BUFFER_FNS(Defer_Completion, defer_completion)
>  
> +static __always_inline void set_buffer_uptodate(struct buffer_head *bh)
> +{
> +	/*
> +	 * make it consistent with folio_mark_uptodate
> +	 * pairs with smp_load_acquire in buffer_uptodate
> +	 */
> +	smp_mb__before_atomic();
> +	set_bit(BH_Uptodate, &bh->b_state);
> +}
> +
> +static __always_inline void clear_buffer_uptodate(struct buffer_head *bh)
> +{
> +	clear_bit(BH_Uptodate, &bh->b_state);
> +}
> +
> +static __always_inline int buffer_uptodate(const struct buffer_head *bh)
> +{
> +	/*
> +	 * make it consistent with folio_test_uptodate
> +	 * pairs with smp_mb__before_atomic in set_buffer_uptodate
> +	 */
> +	return (smp_load_acquire(&bh->b_state) & (1UL << BH_Uptodate)) != 0;
> +}
> +
>  #define bh_offset(bh)		((unsigned long)(bh)->b_data & ~PAGE_MASK)
>  
>  /* If we *know* page->private refers to buffer_heads */
> 
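
For anyone who wants to see the ordering in isolation: below is a minimal,
self-contained userspace sketch of the release/acquire pairing the patch
establishes between set_buffer_uptodate() and buffer_uptodate(). It is
illustrative only, not kernel code; the writer/reader threads and the
"data"/"uptodate" variables are made up for this example, with C11 atomics
standing in for smp_mb__before_atomic()/set_bit() on the publish side and
smp_load_acquire() on the consume side.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

static char data[16];          /* stands in for bh->b_data          */
static atomic_int uptodate;    /* stands in for BH_Uptodate in b_state */

/* I/O-completion side: fill the buffer, then publish it with a release
 * store -- the analogue of smp_mb__before_atomic() + set_bit() in
 * set_buffer_uptodate(). */
static void *writer(void *arg)
{
	(void)arg;
	strcpy(data, "hello");
	atomic_store_explicit(&uptodate, 1, memory_order_release);
	return NULL;
}

/* Consumer side: the acquire load is the analogue of smp_load_acquire()
 * in buffer_uptodate(); once it observes 1, the strcpy() above is
 * guaranteed to be visible. */
static void *reader(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&uptodate, memory_order_acquire))
		;	/* spin until the buffer is published */
	printf("%s\n", data);	/* must print "hello", never stale bytes */
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Without the release/acquire pair, a weakly ordered CPU may let the reader
observe uptodate == 1 while still seeing stale contents of data, which is
the same failure mode the commit message describes for a caller that uses
sb_bread() and then reads the buffer contents.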