On Wed, Feb 08, 2023 at 08:01:01AM -0800, Luis Chamberlain wrote:
> On Tue, Feb 07, 2023 at 04:01:51AM +0000, Matthew Wilcox wrote:
> > On Mon, Feb 06, 2023 at 06:52:59PM -0800, Luis Chamberlain wrote:
> > > @@ -1334,11 +1336,15 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
> > >  	struct shmem_inode_info *info;
> > >  	struct address_space *mapping = folio->mapping;
> > >  	struct inode *inode = mapping->host;
> > > +	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
> > >  	swp_entry_t swap;
> > >  	pgoff_t index;
> > >  
> > >  	BUG_ON(!folio_test_locked(folio));
> > >  
> > > +	if (wbc->for_reclaim && unlikely(sbinfo->noswap))
> > > +		return AOP_WRITEPAGE_ACTIVATE;
> > 
> > Not sure this is the best way to handle this.  We'll still incur the
> > overhead of tracking shmem pages on the LRU, only to fail to write them
> > out when the VM thinks we should get rid of them.  We'd be better off
> > not putting them on the LRU in the first place.
> 
> Ah, makes sense, so in effect then if we do that, on reclaim we should
> even be able to WARN_ON(sbinfo->noswap), assuming we did everything
> right.
> 
> Hrm, we have invalidate_mapping_pages(mapping, 0, -1), but that seems a
> bit too late; how about d_mark_dontcache() on shmem_get_inode() instead?

I was thinking that the two calls to folio_add_lru() in mm/shmem.c
should be conditional on sbinfo->noswap.
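
For illustration, a minimal sketch of that idea, assuming the noswap flag
is reachable through the inode's superblock at both call sites (the helper
name below is hypothetical, not a tested patch):

	/*
	 * Hypothetical helper: skip LRU insertion for folios backed by a
	 * noswap tmpfs mount, so reclaim never considers writing them out
	 * in the first place.
	 */
	static void shmem_folio_add_lru(struct folio *folio, struct inode *inode)
	{
		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);

		if (!sbinfo->noswap)
			folio_add_lru(folio);
	}

Each existing folio_add_lru(folio) call in mm/shmem.c would then become
shmem_folio_add_lru(folio, inode), or the check could simply be open-coded
at both sites.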