In the upcoming THP patch series, if we find a !Uptodate page, it is
because of a read error.  In this case, we want to split the THP into
smaller pages so that we can handle the error in as granular a fashion
as possible.  But xfstests generic/273 defeats this strategy by having
500 threads all sleeping on the same page, each waiting for its turn to
split the page.  None of them will ever succeed, because splitting a
page requires that you hold the only reference to it.

To fix this, use put_and_wait_on_page_locked() to sleep without holding
a reference to the page.  Each of the readers will then go back and
retry the page lookup once the page is unlocked.

Since we now take the page lock a little earlier in
generic_file_buffered_read(), we can eliminate a number of duplicate
checks.  The original intent behind taking the page lock later (commit
ebded02788b5 ("mm: filemap: avoid unnecessary calls to lock_page when
waiting for IO to complete during a read")) was to avoid re-locking the
page after it had been brought uptodate by another thread.  We still
avoid that, because we go through the normal lookup path again after
the winning thread has brought the page uptodate.

Using the "fail 10% of readaheads" patch to induce the !Uptodate case,
I can detect no significant difference in the runtime of generic/273
with this patch applied.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/filemap.c | 69 +++++++++++++++-------------------------------------
 1 file changed, 20 insertions(+), 49 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index f70227941627..ac2dfa857568 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1254,14 +1254,6 @@ static int __wait_on_page_locked_async(struct page *page,
 	return ret;
 }
 
-static int wait_on_page_locked_async(struct page *page,
-				     struct wait_page_queue *wait)
-{
-	if (!PageLocked(page))
-		return 0;
-	return __wait_on_page_locked_async(compound_head(page), wait, false);
-}
-
 /**
  * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked
  * @page: The page to wait for.
@@ -2128,34 +2120,37 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 					put_page(page);
 					goto out;
 				}
-				error = wait_on_page_locked_async(page,
-								iocb->ki_waitq);
+				error = lock_page_async(page, iocb->ki_waitq);
+				if (error)
+					goto readpage_error;
+			} else if (iocb->ki_flags & IOCB_NOWAIT) {
+				put_page(page);
+				goto would_block;
 			} else {
-				if (iocb->ki_flags & IOCB_NOWAIT) {
-					put_page(page);
-					goto would_block;
+				if (!trylock_page(page)) {
+					put_and_wait_on_page_locked(page,
+							TASK_KILLABLE);
+					continue;
 				}
-				error = wait_on_page_locked_killable(page);
 			}
-			if (unlikely(error))
-				goto readpage_error;
 			if (PageUptodate(page))
-				goto page_ok;
+				goto uptodate;
+			if (!page->mapping) {
+				unlock_page(page);
+				put_page(page);
+				continue;
+			}
 
 			if (inode->i_blkbits == PAGE_SHIFT ||
 					!mapping->a_ops->is_partially_uptodate)
-				goto page_not_up_to_date;
+				goto readpage;
 			/* pipes can't handle partially uptodate pages */
 			if (unlikely(iov_iter_is_pipe(iter)))
-				goto page_not_up_to_date;
-			if (!trylock_page(page))
-				goto page_not_up_to_date;
-			/* Did it get truncated before we got the lock? */
-			if (!page->mapping)
-				goto page_not_up_to_date_locked;
+				goto readpage;
 			if (!mapping->a_ops->is_partially_uptodate(page,
 							offset, iter->count))
-				goto page_not_up_to_date_locked;
+				goto readpage;
+uptodate:
 			unlock_page(page);
 		}
 page_ok:
@@ -2221,30 +2216,6 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 				goto out;
 			}
 			continue;
-
-page_not_up_to_date:
-			/* Get exclusive access to the page ... */
-			if (iocb->ki_flags & IOCB_WAITQ)
-				error = lock_page_async(page, iocb->ki_waitq);
-			else
-				error = lock_page_killable(page);
-			if (unlikely(error))
-				goto readpage_error;
-
-page_not_up_to_date_locked:
-			/* Did it get truncated before we got the lock? */
-			if (!page->mapping) {
-				unlock_page(page);
-				put_page(page);
-				continue;
-			}
-
-			/* Did somebody else fill it already? */
-			if (PageUptodate(page)) {
-				unlock_page(page);
-				goto page_ok;
-			}
-
 readpage:
 			if (iocb->ki_flags & (IOCB_NOIO | IOCB_NOWAIT)) {
 				unlock_page(page);
-- 
2.28.0
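
A note for reviewers, not part of the patch: the sketch below shows the
lookup/retry protocol this change moves generic_file_buffered_read()
onto, reduced to a standalone helper.  It is a minimal illustration,
not kernel code: example_locked_page() is a hypothetical name, and it
assumes the two-argument put_and_wait_on_page_locked(page, state) form
used in the diff above (introduced earlier in this series; mainline's
version currently takes no state argument).

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/sched.h>

/*
 * Hypothetical helper: return the page at @index locked and with a
 * reference held, without ever sleeping on the page lock while we
 * still hold a reference of our own.
 */
static struct page *example_locked_page(struct address_space *mapping,
					pgoff_t index)
{
	struct page *page;

retry:
	page = find_get_page(mapping, index);
	if (!page)
		return NULL;	/* caller allocates and reads the page */

	if (!trylock_page(page)) {
		/*
		 * Drop our reference before sleeping: the lock holder
		 * may be trying to split a THP, which only succeeds if
		 * it holds the sole reference.  Once the page is
		 * unlocked, redo the lookup from scratch.
		 */
		put_and_wait_on_page_locked(page, TASK_KILLABLE);
		goto retry;
	}

	/* Did it get truncated while we were waiting? */
	if (!page->mapping) {
		unlock_page(page);
		put_page(page);
		goto retry;
	}

	return page;
}

The point of the protocol is that a sleeping reader pins nothing: the
thread holding the lock can meet split_huge_page()'s only-reference
requirement, and each waiter merely pays one extra page cache lookup
when it wakes.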