Re: [RFC PATCH] mm: readahead: add readahead_shift into backing device

On Mon, Mar 25, 2019 at 09:59:31AM -0700, Mark Salyzyn wrote:
On 03/25/2019 05:16 AM, Fengguang Wu wrote:
Martin,

On Fri, Mar 22, 2019 at 11:46:11PM +0800, Martin Liu wrote:
As in the discussion at https://lore.kernel.org/patchwork/patch/334982/
we know an open file's ra_pages might get out of sync with
bdi.ra_pages after sequential, random, or failed reads. With the
current design we have to ask users to reopen the file or use the
fadvise system call to get it back in sync. However, we might have
cases where we want to change the system-wide ra_pages to improve
performance, such as improving boot time by increasing ra_pages, or decreasing it to
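
For reference, the existing workaround quoted above (reopen the file or
use fadvise) can be done from userspace roughly as follows; this is a
minimal sketch, not part of the patch, and the file path argument is
only for illustration:

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Ask for the default (device-derived) readahead on this fd again. */
	int err = posix_fadvise(fd, 0, 0, POSIX_FADV_NORMAL);
	if (err)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

	close(fd);
	return 0;
}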

Do you have examples of some distro making use of larger ra_pages
for boot-time optimization?

Android (if you are willing to squint and look at android-common AOSP
kernels as a Distro).

OK. I wonder how exactly Android makes use of it. Since phones do not
use hard disks, they should benefit less from large ra_pages. Would
you kindly point me to the code?

Suppose N read streams with equal read speed. The thrash-free memory
requirement would be (N * 2 * ra_pages).

If N=1000 and ra_pages=1MB, it'd require 2GB of memory, which looks
affordable in mainstream servers.
That is 50% of the memory on a high end Android device ...

Yeah, but I'm obviously not talking about Android devices here. Will a
phone serve 1000 concurrent read streams?
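
For the arithmetic above, a tiny throwaway program makes the estimate
explicit (the stream count and readahead size are the illustrative
numbers from this thread, not measurements):

#include <stdio.h>

int main(void)
{
	unsigned long long streams = 1000;		/* N concurrent readers */
	unsigned long long ra_bytes = 1024 * 1024;	/* 1MB worth of ra_pages */

	/* Thrash-free requirement ~= N * 2 * ra_pages. */
	unsigned long long need = streams * 2 * ra_bytes;

	printf("thrash-free estimate: %llu MB\n", need >> 20);	/* prints 2000 */
	return 0;
}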

Sorry, but it sounds like introducing an unnecessarily twisted new
interface. I'm afraid it fixes the pain for 0.001% of users while
bringing more confusion to the majority of others.
>2B Android devices on the planet is 0.001%?

Nope. Sorry I didn't know about the Android usage.
Actually nobody mentioned it in the past discussions.

I am not defending the proposed interface, though; if there is something
better that can be used, then it is worth looking into:

Then let fadvise() and shrink_readahead_size_eio() adjust that
per-file ra_pages_shift.
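
(Purely as an illustration of that suggestion, not kernel code: the idea
is to keep only a shift per open file and derive the effective window
from the bdi value at read time, so a system-wide change to bdi.ra_pages
is picked up by files that are already open. The names below, such as
effective_ra_pages, are hypothetical.)

#include <stdio.h>

static unsigned long effective_ra_pages(unsigned long bdi_ra_pages,
					unsigned int ra_pages_shift)
{
	/* fadvise()/shrink_readahead_size_eio() would adjust only the shift. */
	return bdi_ra_pages >> ra_pages_shift;
}

int main(void)
{
	unsigned long bdi_ra_pages = 32;	/* e.g. 128KB with 4KB pages */
	unsigned int shift = 2;			/* say, shrunk after read errors */

	printf("effective readahead: %lu pages\n",
	       effective_ra_pages(bdi_ra_pages, shift));
	return 0;
}
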
Sounds like this would require a lot from init to globally audit and
reduce the read-ahead for all open files?

It depends. In theory it should be possible to create a standalone
kernel module to dump the page cache and get the current snapshot of
all cached file pages. It'd be a one-shot action and wouldn't require
continuous auditing.

[RFC] kernel facilities for cache prefetching
https://lwn.net/Articles/182128

This tool may also work. It's quick to get the list of open files by
walking /proc/*/fd/; however, it is not as easy to get the list of
cached file names.

https://github.com/tobert/pcstat

Perhaps we can do a simplified /proc/filecache that only dumps the
list of cached file names, and then let mincore()-based tools take care
of the rest of the work.
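
For what it's worth, the mincore() side of that is already simple enough
to sketch in a few lines of userspace C; this is roughly what
mincore()-based tools like pcstat do per file (minimal error handling,
file name from argv for illustration):

#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	struct stat st;
	if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
		perror(argv[1]);
		return 1;
	}

	void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	long page = sysconf(_SC_PAGESIZE);
	size_t pages = (st.st_size + page - 1) / page;
	unsigned char *vec = malloc(pages);

	/* mincore() fills one byte per page; bit 0 set => page is resident. */
	if (vec && mincore(map, st.st_size, vec) == 0) {
		size_t cached = 0;
		for (size_t i = 0; i < pages; i++)
			cached += vec[i] & 1;
		printf("%s: %zu/%zu pages in page cache\n", argv[1], cached, pages);
	}

	free(vec);
	munmap(map, st.st_size);
	close(fd);
	return 0;
}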

Regards,
Fengguang


