RE: [EXT] how to disable readahead


 



>> >> >>
>> >> >> And then used btrace to monitor the I/O requests sent to the device:
>> >> >>
>> >> >> 252,4    0      413     0.077274997 14645  Q   R 4408 + 8 [dd]
>> >> >> 252,4    2       77     0.077355648  5529  C   R 4408 + 8 [0]
>> >> >> 252,4    0      414     0.077393725 14645  Q   R 4416 + 8 [dd]
>> >> >> 252,4    2       78     0.077630722  5529  C   R 4416 + 8 [0]
>> >> >> 	...
>> >> >>
>> >> >> ... and indeed, the reads are being sent to the device in 4k chunks.
>> >> >> That's indeed surprising.  I'd have to do some debugging with
>> >> >> tracepoints to see what requests are being issued from the
>> >> >> mm/filemap.c to the file system.
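For reference: in the btrace output above, `4408 + 8` means 8 sectors (512 bytes each) starting at sector 4408, i.e. a single 4KB request. A minimal sketch of that arithmetic:

```python
SECTOR_SIZE = 512  # blktrace reports request lengths in 512-byte sectors

def request_bytes(sector_field):
    """Parse the 'start + count' field of a blktrace line
    (e.g. '4408 + 8') and return the request size in bytes."""
    start, plus, count = sector_field.split()
    assert plus == "+"
    return int(count) * SECTOR_SIZE

print(request_bytes("4408 + 8"))  # 8 sectors * 512 B = 4096 B, one page
```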
>> >> >
>> >> > And this is in fact expected. There are two basic ways how data
>> >> > can appear in page cache: ->readpage and ->readpages filesystem
>> >> > callbacks. The second one is what readahead (and only readahead)
>> >> > uses, the first one is used as a fallback when readahead fails
>> >> > for some reason. So if you disable readahead, you're left only
>> >> > with the ->readpage call, which does only one-page (4k) reads.
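A toy model of the two fill paths just described (illustrative Python, not kernel code; the real mechanisms are the filesystem's ->readpage and ->readpages callbacks):

```python
PAGE_KB = 4  # one page on x86_64, in KB

def page_cache_fill(request_kb, readahead_enabled):
    """Toy model: with readahead, ->readpages can submit one batched
    request; with readahead disabled, the ->readpage fallback issues
    one I/O per 4KB page."""
    if readahead_enabled:
        return [request_kb]                          # one batched read
    return [PAGE_KB] * (request_kb // PAGE_KB)       # one read per page

print(page_cache_fill(64, True))   # one 64KB batch
print(page_cache_fill(64, False))  # sixteen separate 4KB reads
```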
>> >>
>> >> Even *with* readahead, why would we add the overhead of processing
>> >> each page separately instead of handling all pages in a single
>> >> batch via ->readpages()?
>> >
>> >Hum, I don't understand. With readahead enabled, we should be
>> >submitting larger batches of IO as generated by ->readpages call and
>> >->readpage actually never ends up issuing any IO (see how
>> >generic_file_buffered_read() calls
>> >page_cache_sync_readahead() first which ends up locking pages and
>> >submitting reads) and only then we go, search for the page again and
>> >lock it - which effectively waits for the readahead to pull in the first page.
>> >
>> >								Honza
>> >--
>> >Jan Kara <jack@xxxxxxxx>
>> >SUSE Labs, CR
>>
>> 'read_ahead_kb' should only control readahead (the speculative
>> second read); it shouldn't act as a flag that changes the chunk
>> size of the first read request coming from user space, even when
>> 'read_ahead_kb' is configured to 0.
>
>OK, so you made me look into the details of how the read request size
>gets computed :).  The thing is: when read_ahead_kb is 0, we really do
>single-page reads, as all the cleverness in trying to issue large read
>requests gets disabled.  Once read_ahead_kb is > 0 (you have to write
>there at least PAGE_SIZE, i.e. 4 on x86_64), we will actually issue
>requests at least as large as the size requested in the syscall.
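A sketch of the rounding implied here (illustrative only; read_ahead_kb is stored internally as a whole number of pages, so anything below one page behaves like 0; a 4096-byte PAGE_SIZE is assumed):

```python
PAGE_SIZE = 4096  # assumed x86_64 page size

def effective_read_ahead_kb(value_kb):
    """Model of the sysfs rounding: read_ahead_kb is kept as a whole
    number of pages, so values below one page round down to 0, which
    disables readahead entirely."""
    pages = (value_kb * 1024) // PAGE_SIZE
    return pages * PAGE_SIZE // 1024

print(effective_read_ahead_kb(0))  # readahead disabled
print(effective_read_ahead_kb(3))  # below one page: also disabled
print(effective_read_ahead_kb(4))  # one page: minimum working value
```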
>
>								Honza
>--
>Jan Kara <jack@xxxxxxxx>
>SUSE Labs, CR

Meanwhile, I noticed that if 'read_ahead_kb' is 128 (128KB) and you read the data in 512KB chunks,
each 512KB request gets split into 4 x 128KB requests before being sent to the HW device;
when 'read_ahead_kb' is 512 (512KB), the 512KB read request is passed directly to the lower layers.
This also doesn't make sense.  The lower layers can buffer 512KB of data, so a 512KB read shouldn't be split into four 128KB requests.
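The splitting behaviour described above can be sketched as follows (illustrative only; the real readahead window logic lives in mm/readahead.c):

```python
def split_into_requests(request_kb, read_ahead_kb):
    """Illustrative model: a large read is broken into chunks no
    larger than read_ahead_kb (assumes read_ahead_kb > 0)."""
    chunks = []
    remaining = request_kb
    while remaining > 0:
        step = min(remaining, read_ahead_kb)
        chunks.append(step)
        remaining -= step
    return chunks

print(split_into_requests(512, 128))  # four 128KB requests
print(split_into_requests(512, 512))  # one 512KB request
```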


--Bean Huo




