On Tue, Mar 31, 2020 at 09:31:25PM -0700, Darrick J. Wong wrote:
> On Tue, Mar 31, 2020 at 08:04:21PM -0700, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
> >
> > bio_alloc() can fail when we use GFP_NORETRY. If it does, allocate
> > a bio large enough for a single page like mpage_readpages() does.
>
> Why does mpage_readpages() do that?
>
> Is this a means to guarantee some kind of forward (readahead?) progress?
> Forgive my ignorance, but if memory is so tight we can't allocate a bio
> for readahead then why not exit having accomplished nothing?

As far as I can tell, it's just a general fallback in mpage_readpages().

 * If anything unusual happens, such as:
 *
 * - encountering a page which has buffers
 * - encountering a page which has a non-hole after a hole
 * - encountering a page with non-contiguous blocks
 *
 * then this code just gives up and calls the buffer_head-based read
 * function.

The actual code for that is:

	args->bio = mpage_alloc(bdev, blocks[0] << (blkbits - 9),
				min_t(int, args->nr_pages, BIO_MAX_PAGES),
				gfp);
	if (args->bio == NULL)
		goto confused;
...
confused:
	if (args->bio)
		args->bio = mpage_bio_submit(REQ_OP_READ, op_flags, args->bio);
	if (!PageUptodate(page))
		block_read_full_page(page, args->get_block);
	else
		unlock_page(page);

As the comment implies, there are a lot of 'goto confused' cases in
do_mpage_readpage().

Ideally, yes, we'd just give up on reading this page because it's only
readahead, and we shouldn't stall actual work in order to reclaim memory
so we can finish doing readahead. However, handling a partial page read
is painful. Allocating a bio big enough for a single page is much easier
on the mm than allocating a larger bio (for a start, it's a single
allocation, not a pair of allocations), so this is a reasonable
compromise between simplicity of code and quality of implementation.
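For what it's worth, the shape of the fallback being discussed is roughly
the following. This is a minimal sketch against the two-argument
bio_alloc() of that era, not the actual patch; alloc_readahead_bio() and
its nr_vecs/gfp parameters are made-up names for illustration.

	/*
	 * Hypothetical helper, not the actual patch: try to allocate a
	 * full-sized readahead bio without entering reclaim, and fall
	 * back to a single-page bio if that fails, mirroring what
	 * mpage_readpages() does via its 'goto confused' path.
	 */
	static struct bio *alloc_readahead_bio(unsigned int nr_vecs,
					       gfp_t gfp)
	{
		struct bio *bio;

		/*
		 * Opportunistic: don't stall in reclaim just to build a
		 * big readahead bio, and don't warn when it fails.
		 */
		bio = bio_alloc(gfp | __GFP_NORETRY | __GFP_NOWARN,
				min_t(unsigned int, nr_vecs,
				      BIO_MAX_PAGES));
		if (bio)
			return bio;

		/*
		 * Fallback: a one-vec bio fits in a single allocation
		 * (no separate biovec array), which is far easier on
		 * the mm than retrying the full-sized request.
		 */
		return bio_alloc(gfp, 1);
	}

Assuming the caller's base gfp mask allows direct reclaim, that second
bio_alloc() is backed by a mempool and so can't fail outright, which is
what makes the single-page path a safe last resort.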