>> OK, so you made me look into details of how the read request size gets
>> computed :). The thing is: when read_ahead_kb is 0, we really do
>> single-page reads, as all the cleverness in trying to issue large read
>> requests gets disabled.
>> Once read_ahead_kb is > 0 (you have to write there at least PAGE_SIZE -
>> i.e. 4 on x86_64), we will actually issue requests at least as large as
>> the size requested in the syscall.
>>
>> 							Honza
>> --
>> Jan Kara <jack@xxxxxxxx>
>> SUSE Labs, CR
>
> Meanwhile, I noticed that if 'read_ahead_kb' is 128 (128 KB) and you read
> the data in 512 KB chunks, the 512 KB request is split into four 128 KB
> requests to the HW device; when 'read_ahead_kb' is 512 (512 KB), the
> 512 KB chunk read request is passed directly to the lower layers.
> This also doesn't make sense: the lower layers can buffer 512 KB of data,
> so a 512 KB request shouldn't be split into four 128 KB requests.
>
> --Bean Huo

I did simple performance testing on my platform; there is a huge impact
on read performance.

Test conditions: sync I/O, read/write chunk size 512 KB.

read_ahead_kb = 128:
  random read      443215.64 kB/sec
  random write     364662.21 kB/sec
  sequential read  503381.95 kB/sec

read_ahead_kb = 512:
  random read      534232.84 kB/sec
  random write     336783.41 kB/sec
  sequential read  544225.89 kB/sec

--Bean Huo
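As a rough model of the splitting behavior described above (a sketch of the observed effect, not the actual kernel readahead code - `split_requests` is a hypothetical helper introduced here for illustration):

```python
def split_requests(read_kb, read_ahead_kb, page_kb=4):
    """Model how a read of read_kb is broken into device requests,
    based on the behavior reported in this thread.

    - read_ahead_kb == 0: readahead is disabled, so the kernel falls
      back to single-page reads (page_kb = 4 on x86_64).
    - otherwise: the request is issued in pieces no larger than the
      readahead window.
    """
    if read_ahead_kb == 0:
        return [page_kb] * (read_kb // page_kb)
    sizes = []
    remaining = read_kb
    while remaining > 0:
        piece = min(remaining, read_ahead_kb)
        sizes.append(piece)
        remaining -= piece
    return sizes


# A 512 KB read with read_ahead_kb = 128 goes down as four 128 KB requests:
print(split_requests(512, 128))  # -> [128, 128, 128, 128]
# With read_ahead_kb = 512 it passes through in one piece:
print(split_requests(512, 512))  # -> [512]
# With read_ahead_kb = 0, it degrades to 128 single-page (4 KB) reads:
print(len(split_requests(512, 0)))  # -> 128
```

This matches the benchmark below: raising read_ahead_kb from 128 to 512 lets each 512 KB chunk reach the device as a single request, which is where the sequential and random read gains come from.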