Re: [PATCH 0/5] fs/buffer: strack reduction on async read

On Fri, Jan 31, 2025 at 08:54:31AM -0800, Luis Chamberlain wrote:
> On Thu, Dec 19, 2024 at 03:51:34AM +0000, Matthew Wilcox wrote:
> > On Wed, Dec 18, 2024 at 06:27:36PM -0800, Luis Chamberlain wrote:
> > > On Wed, Dec 18, 2024 at 08:05:29PM +0000, Matthew Wilcox wrote:
> > > > On Tue, Dec 17, 2024 at 06:26:21PM -0800, Luis Chamberlain wrote:
> > > > > This splits up a minor enhancement from the bs > ps device support
> > > > > series into its own series for better review / focus / testing.
> > > > > This series just addresses reducing the array size used and cleaning
> > > > > up the async read to be easier to read and maintain.
> > > > 
> > > > How about this approach instead -- get rid of the batch entirely?
> > > 
> > > Less is more! I wish it worked, but we end up with a null pointer on
> > > ext4/032 (and indeed this is the test that helped me find most bugs in
> > > what I was working on):
> > 
> > Yeah, I did no testing; just wanted to give people a different approach
> > to consider.
> > 
> > > [  106.034851] BUG: kernel NULL pointer dereference, address: 0000000000000000
> > > [  106.046300] RIP: 0010:end_buffer_async_read_io+0x11/0x90
> > > [  106.047819] Code: f2 ff 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 53 48 8b 47 10 48 89 fb 48 8b 40 18 <48> 8b 00 f6 40 0d 40 74 0d 0f b7 00 66 25 00 f0 66 3d 00 80 74 09
> > 
> > That decodes as:
> > 
> >    5:	53                   	push   %rbx
> >    6:	48 8b 47 10          	mov    0x10(%rdi),%rax
> >    a:	48 89 fb             	mov    %rdi,%rbx
> >    d:	48 8b 40 18          	mov    0x18(%rax),%rax
> >   11:*	48 8b 00             	mov    (%rax),%rax		<-- trapping instruction
> >   14:	f6 40 0d 40          	testb  $0x40,0xd(%rax)
> > 
> > 6: bh->b_folio
> > d: b_folio->mapping
> > 11: mapping->host
> > 
> > So folio->mapping is NULL.
> > 
> > Ah, I see the problem.  end_buffer_async_read() uses the buffer_async_read
> > test to decide if all buffers on the page are uptodate or not.  So both
> > having no batch (ie this patch) and having a batch which is smaller than
> > the number of buffers in the folio can lead to folio_end_read() being
> > called prematurely (ie we'll unlock the folio before finishing reading
> > every buffer in the folio).
> 
> But:
> 
> a) all batched buffers are locked in the old code, we only unlock
>    the currently evaluated buffer; the buffers from our pivot onward
>    are locked and should also have the async flag set. The fact that
>    buffers ahead should have the async flag set should prevent us from
>    calling folio_end_read() prematurely as I read the code, no?

I'm sure you know what you mean by "the old code", but I don't.

If you mean "the code in 6.13", here's what it does:

        tmp = bh;
        do {
                if (!buffer_uptodate(tmp))
                        folio_uptodate = 0;
                if (buffer_async_read(tmp)) {
                        BUG_ON(!buffer_locked(tmp));
                        goto still_busy;
                }
                tmp = tmp->b_this_page;
        } while (tmp != bh);
        folio_end_read(folio, folio_uptodate);

so it's going to cycle around every buffer on the page, and if it finds
none which are marked async_read, it'll call folio_end_read().
That's fine in 6.13 because in stage 2, all buffers which are part of
this folio are marked as async_read.

In your patch, you mark every buffer _in the batch_ as async_read
and then submit the entire batch.  So if they all complete before you
mark the next bh as async_read, the completion handler finds no busy
buffers, thinks the read is complete, and calls folio_end_read().




