On Sun, Jan 19, 2020 at 10:51:37AM +0800, yukuai (C) wrote:
> At first, if you try to add all pages to the page cache and lock them
> before iomap_begin: I thought about it before, but I threw away the
> idea because every other operation that needs to lock the page would
> have to wait for readahead to finish, which might cause a performance
> problem. And if you try to add each page to the page cache and call
> iomap before adding the next page, then we are facing the same CPU
> overhead issue.

I don't understand your reasoning here.  If another process wants to
access a page of the file which isn't currently in cache, it would have
to first read the page in from storage.  If it's under readahead, it
has to wait for the read to finish.  Why is the second case worse than
the first?  It seems better to me.

The implementation doesn't call iomap for each page.  It allocates all
the pages and then calls iomap for the range.

> Then, there might be a problem in your implementation.
> If 'use_list' is set to true here:
> +	bool use_list = mapping->a_ops->readpages;
>
> Your code does not call add_to_page_cache_lru() for the page.

It can't.  The readpages implementation has to call
add_to_page_cache_lru() itself.  But for filesystems which use readpage
or readahead, we can put the pages in the page cache before calling
readahead.

> And later, you replace 'iomap_next_page' with 'readahead_page':
> +static inline
> +struct page *readahead_page(struct address_space *mapping, loff_t pos)
> +{
> +	struct page *page = xa_load(&mapping->i_pages, pos / PAGE_SIZE);
> +	VM_BUG_ON_PAGE(!PageLocked(page), page);
> +
> +	return page;
> +}
> +
>
> It seems that the page will never be added to the page cache.

At the same time, the iomap code is switched from ->readpages to
->readahead, so yes, the pages are added to the page cache.
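
To make the use_list point concrete, here is a minimal sketch (not the
patch itself) of the allocation step: pages for the readahead window
are allocated up front and, when the filesystem has ->readahead or only
->readpage, added to the page cache locked via add_to_page_cache_lru();
only the ->readpages case still collects them on a list, since those
implementations add the pages to the cache themselves.  The helper name
readahead_populate_sketch() is invented for illustration.

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Sketch only: allocate the readahead window and decide, per the
 * use_list flag, whether the pages go straight into the page cache
 * or onto a list for a ->readpages implementation to insert later.
 */
static void readahead_populate_sketch(struct address_space *mapping,
				      struct list_head *pages,
				      pgoff_t index, unsigned long nr)
{
	bool use_list = mapping->a_ops->readpages;
	gfp_t gfp = readahead_gfp_mask(mapping);
	unsigned long i;

	for (i = 0; i < nr; i++) {
		struct page *page = __page_cache_alloc(gfp);

		if (!page)
			break;
		if (use_list) {
			/* ->readpages will add the page to the cache */
			page->index = index + i;
			list_add(&page->lru, pages);
		} else if (add_to_page_cache_lru(page, mapping,
						 index + i, gfp)) {
			/* Page already present (or we raced); stop here */
			put_page(page);
			break;
		}
	}
}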
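
And, for the consumer side, a rough sketch of how the readahead_page()
helper quoted above would be used once the pages are already in the
cache: the per-page code only has to look each locked page up instead
of allocating it the way iomap_next_page() did.  Reference counting and
error handling are elided, and read_pages_fallback_sketch() is an
invented name, not code from the patch.

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Sketch only: hand each already-cached, locked page to ->readpage;
 * an iomap ->readahead actor would instead add the pages to a bio.
 */
static void read_pages_fallback_sketch(struct address_space *mapping,
				       struct file *file,
				       loff_t pos, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		/* Locked page inserted by the readahead code above */
		struct page *page = readahead_page(mapping,
						   pos + i * PAGE_SIZE);

		/* ->readpage unlocks the page when the read completes */
		mapping->a_ops->readpage(file, page);
	}
}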