On Wed 19-04-17 14:02:56, Josef Bacik wrote:
> 
> > On Apr 19, 2017, at 3:02 AM, Jan Kara <jack@xxxxxxx> wrote:
> > 
> > On Tue 18-04-17 16:04:17, Josef Bacik wrote:
> >> By default we set DCACHE_REFERENCED and I_REFERENCED on any dentry
> >> or inode we create. This is problematic, as it means that it takes
> >> two trips through the LRU for any of these objects to be reclaimed,
> >> regardless of their actual lifetime. With enough pressure from
> >> these caches we can easily evict our working set from page cache
> >> with single-use objects. So instead only set *REFERENCED if the
> >> object has already been added to the LRU list. That means it has
> >> been touched since the first time it was accessed, and so is more
> >> likely to need to hang around in cache.
> >>
> >> To illustrate this issue I wrote the following scripts:
> >>
> >> https://github.com/josefbacik/debug-scripts/tree/master/cache-pressure
> >>
> >> I ran them on my test box, which has a single-socket 4-core CPU and
> >> 16GiB of RAM, testing on an Intel 2TiB NVMe drive. The
> >> cache-pressure.sh script creates a new file system and creates two
> >> 6.5GiB files in order to take up 13GiB of the 16GiB of RAM with
> >> page cache. Then it runs a test program that reads these two files
> >> in a loop and keeps track of how many bytes it has to read on each
> >> loop. On an ideal system with no pressure we should have to read 0
> >> bytes indefinitely. The second thing this script does is start an
> >> fs_mark job that creates a ton of zero-length files, putting
> >> pressure on the system with slab-only allocations. On exit the
> >> script prints out how many bytes were read by the read-file
> >> program. The results are as follows:
> >>
> >> Without patch:
> >> /mnt/btrfs-test/reads/file1: total read during loops 27262988288
> >> /mnt/btrfs-test/reads/file2: total read during loops 27262976000
> >>
> >> With patch:
> >> /mnt/btrfs-test/reads/file2: total read during loops 18640457728
> >> /mnt/btrfs-test/reads/file1: total read during loops 9565376512
> >>
> >> This patch results in a 50% reduction in the number of pages
> >> evicted from our working set.
> >>
> >> Signed-off-by: Josef Bacik <jbacik@xxxxxx>
> > 
> > To me this makes sense, and I have myself wondered in the past why
> > we add dentries and inodes to the LRU list with the referenced bit
> > set. I would just note that the patch changes the balance between
> > dentry/inode cache eviction and page cache eviction (so that pages
> > have a lower chance of being evicted), and your test measures only
> > the benefit of fewer pages being evicted. So to be fair you should
> > also complement it with a test where dentries & inodes actually do
> > get eventually reused, and see how that works out in the presence
> > of memory pressure from use-once pages in the page cache.
> 
> Yup, that's fair. I wrote up a different script to exercise warm
> icache/dcache vs. use-once pages; the scripts are in the same place
> as my original script:
> 
> https://github.com/josefbacik/debug-scripts/tree/master/cache-pressure
> 
> Basically: create 3 million empty files, create a 100GiB sparse
> file, and stat all of the files in a loop while cat'ing the sparse
> file to /dev/null. There's a bcc script in there to count how many
> times the stat'ing programs allocate a new inode, and both with and
> without my patch the test exited with those processes creating 0 new
> inodes. This makes sense: we create the files and they do not have
> *REFERENCED, and then as soon as the files are read in again they
> are marked with *REFERENCED, so they become just as hard to reclaim
> as they were originally.
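
Just to spell out for the archives what is being tested here: as I
read the changelog, the dcache half of the change boils down to
something like the sketch below. This is my paraphrase of the
description above, not necessarily the literal hunk (the inode side
would be the analogous dance with I_REFERENCED in the i_state flags),
so treat the exact placement as illustrative:

	/* Sketch only -- paraphrased from the changelog, not the patch. */
	static void dentry_lru_add(struct dentry *dentry)
	{
		if (unlikely(!(dentry->d_flags & DCACHE_LRU_LIST))) {
			/*
			 * First release of a new dentry: put it on the
			 * LRU *without* DCACHE_REFERENCED, so a use-once
			 * dentry can be reclaimed on its first LRU pass.
			 */
			d_lru_add(dentry);
		} else if (unlikely(!(dentry->d_flags & DCACHE_REFERENCED))) {
			/*
			 * Already on the LRU, i.e. it has been touched
			 * again since creation: mark it referenced so
			 * it survives one more pass of the shrinker.
			 */
			dentry->d_flags |= DCACHE_REFERENCED;
		}
	}

The old behaviour was equivalent to doing d_lru_add() and then setting
DCACHE_REFERENCED unconditionally, which is where the guaranteed two
trips through the LRU came from.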

OK, sounds good. Feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR