On Fri, Apr 26, 2024 at 11:25:25PM +0530, Ritesh Harjani wrote:
> Matthew Wilcox <willy@xxxxxxxxxxxxx> writes:
> > The approach I suggested was to initialise read_bytes_pending to
> > folio_size() at the start.  Then subtract off blocksize for each
> > uptodate block, whether you find it already uptodate, or as the
> > completion handler runs.
> >
> > Is there a reason that doesn't work?
>
> That is what this patch series does, right?  The current patch does
> work as far as my testing goes.
>
> For example, this is what initializes the r_b_p for the first time,
> when ifs->r_b_p is 0:
>
> +		loff_t to_read = min_t(loff_t, iter->len - offset,
> +				folio_size(folio) - offset_in_folio(folio, orig_pos));
> <..>
> +		if (!ifs->read_bytes_pending)
> +			ifs->read_bytes_pending = to_read;
>
> Then this is where we subtract r_b_p for blocks which are already
> uptodate:
>
> +		padjust = pos - orig_pos;
> +		ifs->read_bytes_pending -= padjust;
>
> And this is where we adjust r_b_p when we zero the folio directly:
>
> 	if (iomap_block_needs_zeroing(iter, pos)) {
> +		if (ifs) {
> +			spin_lock_irq(&ifs->state_lock);
> +			ifs->read_bytes_pending -= plen;
> +			if (!ifs->read_bytes_pending)
> +				rbp_finished = true;
> +			spin_unlock_irq(&ifs->state_lock);
> +		}
>
> But as you can see, this requires surgery throughout the read paths.
> What if we instead added a state flag to ifs, just for BH_BOUNDARY?
> Maybe that would result in a simpler approach, since all we need to
> know at completion time is whether the folio should be unlocked.
>
> Do you think we should try that, or does the current approach look OK?

You've really made life hard for yourself.  I had something more like
this in mind.  I may have missed a few places that need to be changed,
but this should update read_bytes_pending everywhere that we set bits
in the uptodate bitmap, so it should be right?

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 41c8f0c68ef5..f87ca8ee4d19 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -79,6 +79,7 @@ static void iomap_set_range_uptodate(struct folio *folio, size_t off,
 	if (ifs) {
 		spin_lock_irqsave(&ifs->state_lock, flags);
 		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
+		ifs->read_bytes_pending -= len;
 		spin_unlock_irqrestore(&ifs->state_lock, flags);
 	}

@@ -208,6 +209,8 @@ static struct iomap_folio_state *ifs_alloc(struct inode *inode,
 	spin_lock_init(&ifs->state_lock);
 	if (folio_test_uptodate(folio))
 		bitmap_set(ifs->state, 0, nr_blocks);
+	else
+		ifs->read_bytes_pending = folio_size(folio);
 	if (folio_test_dirty(folio))
 		bitmap_set(ifs->state, nr_blocks, nr_blocks);
 	folio_attach_private(folio, ifs);
@@ -396,12 +399,6 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	}

 	ctx->cur_folio_in_bio = true;
-	if (ifs) {
-		spin_lock_irq(&ifs->state_lock);
-		ifs->read_bytes_pending += plen;
-		spin_unlock_irq(&ifs->state_lock);
-	}
-
 	sector = iomap_sector(iomap, pos);
 	if (!ctx->bio ||
 	    bio_end_sector(ctx->bio) != sector ||
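
To spell out the invariant the diff above relies on: read_bytes_pending
starts at folio_size() for a !uptodate folio, every byte that becomes
uptodate (found already uptodate, zeroed, or filled by I/O completion)
is subtracted exactly once under state_lock, and the folio can be
unlocked the moment the counter reaches zero.  A minimal sketch of that
decision -- rbp_read_done() is a made-up name for illustration, not a
helper in fs/iomap/buffered-io.c:

static bool rbp_read_done(struct iomap_folio_state *ifs, size_t len)
{
	unsigned long flags;
	bool finished;

	spin_lock_irqsave(&ifs->state_lock, flags);
	/* Each uptodate range must be accounted exactly once. */
	WARN_ON_ONCE(len > ifs->read_bytes_pending);
	ifs->read_bytes_pending -= len;
	finished = !ifs->read_bytes_pending;
	spin_unlock_irqrestore(&ifs->state_lock, flags);

	/* true => no reads outstanding; caller may end the read
	 * and unlock the folio. */
	return finished;
}

Because the subtraction happens in iomap_set_range_uptodate() itself,
every path that marks blocks uptodate keeps the counter consistent
without per-call-site surgery.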