Hi,

> > >
> > > > The ra_sit_pages() tries to read consecutive sit pages as many as
> > > > possible.
> > > > So then, what about just checking whether its block address is
> > > > contiguous or not?
> > > >
> > > > Something like this:
> > > >
> > > > -ra_sit_pages()
> > > >	blkno = start;
> > > >	while (blkno < sit_i->sit_blocks) {
> > > >		blk_addr = current_sit_addr(sbi, blkno);
> > > >		if (blkno != start && prev_blk_addr + 1 != blk_addr)
> > > >			break;
> > > >
> > > >		/* grab and submit_read_page */
> > > >
> > > >		prev_blk_addr = blk_addr;
> > > >		blkno++;
> > > >	}
> > >
> > > Agreed, this method could remove *order.
> > > Shouldn't we add nrpages for the readahead policy, as the VM does?
> >
> > Aha, agreed.
> > We need nrpages to avoid too many reads on sit blocks.
> >
> > But it still needs nrpages to be changed in its caller.
> > In your patch, it was sit_i->sit_blocks, which is the total # of sit blocks.
> > I think 128 or 256 is a quite reasonable number.
>
> Hmm, originally in [PATCH V1] it was set to
> MAX_BIO_BLOCKS(max_hw_blocks(sbi)).
>
> So it could be "#define SIT_ENTRIES_RA_NUM 128"?
> BTW, maybe we should use a dynamic nrpages that depends on the
> memory state of the system, as I mentioned in the previous thread.
> What do you think?

I think it'd be better to use MAX_BIO_BLOCKS().

> >
> > Anyway, how about implementing ra_sit_pages() with a blk_plug, like
> > ra_node_pages()?

My mistake. I meant ra_nat_pages().

> So we use this structure to plug multiple bio submissions in ra_sit_pages(), right?
>
> -build_sit_entries()
>	blk_start_plug(&plug);
>	ra_sit_pages();
>	blk_finish_plug(&plug);

Ah. What I meant was:

	blk_start_plug(&plug);
	for ()
		read_sit_page();
	blk_finish_plug(&plug);

But it is not a big deal. It doesn't matter to use your approach.
Please ignore this.

BTW, I found that we can use submit_read_page() at ra_nat_pages() and
remove the block plugging. I'll send a patch for this. :)

Thanks,

--
Jaegeuk Kim
Samsung
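
[Editor's sketch] For reference, below is a minimal sketch that pulls the points of this thread together: the contiguity check, an nrpages cap on the readahead, and a blk_plug around the read submissions. It is only an illustration of the discussed approach, not the actual f2fs code; the helper names (current_sit_addr, grab_meta_page, submit_read_page, f2fs_put_page) and their exact signatures are assumptions taken from the discussion above.

/*
 * Sketch only: read ahead up to nrpages SIT blocks starting at 'start',
 * stopping at the first block whose on-disk address is not contiguous
 * with the previous one.  Helper names and signatures are assumed from
 * the thread and may not match the real f2fs code.
 */
static int ra_sit_pages(struct f2fs_sb_info *sbi, unsigned int start, int nrpages)
{
	struct sit_info *sit_i = SIT_I(sbi);
	block_t blk_addr, prev_blk_addr = 0;
	unsigned int blkno = start;
	int readcnt = 0;
	struct page *page;
	struct blk_plug plug;

	/* batch the read bios so the block layer can merge them */
	blk_start_plug(&plug);

	while (readcnt < nrpages && blkno < sit_i->sit_blocks) {
		blk_addr = current_sit_addr(sbi, blkno);

		/* stop readahead at the first non-contiguous block address */
		if (blkno != start && prev_blk_addr + 1 != blk_addr)
			break;

		/* grab the meta page and submit an async read if needed */
		page = grab_meta_page(sbi, blk_addr);
		if (PageUptodate(page)) {
			f2fs_put_page(page, 1);
		} else {
			submit_read_page(sbi, page, blk_addr, READ);
			f2fs_put_page(page, 0);
		}

		prev_blk_addr = blk_addr;
		blkno++;
		readcnt++;
	}

	blk_finish_plug(&plug);
	return readcnt;
}

A caller such as build_sit_entries() could then pass something like MAX_BIO_BLOCKS(max_hw_blocks(sbi)) as nrpages, per the discussion above.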