Linux page cache issue?

Hi,

If a Linux process opens and reads a file A, then closes it, will Linux
keep file A's data in the page cache for a while, in case another
process opens and reads the same file a short time later? I think that
is what I heard before.
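
To test this from userspace, here is a minimal sketch (the path
"testfile" is just a placeholder): it reads the same file twice,
closing and reopening it in between, and times each pass. If the cache
survives the close, the second pass should be much faster.

/*** cache_timing.c (sketch) ***/
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double read_whole_file(const char *path)
{
	char buf[1 << 16];
	struct timespec t0, t1;
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		exit(1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t0);
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;			/* discard the data, we only time the reads */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	close(fd);			/* drop our reference; does the cache survive? */
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";

	printf("first pass  (cold, likely from disk): %.3f s\n", read_whole_file(path));
	printf("second pass (fast if still cached):   %.3f s\n", read_whole_file(path));
	return 0;
}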

But after digging into the kernel code, I am confused.

When a process closes the file A, iput() will be called, which in turn
goes through the following two functions:
iput_final()->generic_drop_inode()

But from the following calling chain, we can see that closing the file
will eventually evict and free all cached pages. Actually, in
truncate_complete_page(), the pages are freed. This seems to imply
that Linux has to re-read the same data from disk even if another
process B reads the same file right after process A closes it. That
does not make sense to me.

/***calling chain ***/
generic_delete_inode/generic_forget_inode()->
truncate_inode_pages()->truncate_inode_pages_range()->
truncate_complete_page()->remove_from_page_cache()->
__remove_from_page_cache()->radix_tree_delete()
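
And to check directly whether the pages get dropped, here is a rough
sketch using mincore() ("testfile" is again just a placeholder path):
run it after another process has read and closed the file, and it
reports how many of the file's pages are still resident in the page
cache. mmap() by itself does not fault the pages in, so the count
reflects what the earlier read left behind.

/*** residency.c (sketch) ***/
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	long page_size = sysconf(_SC_PAGESIZE);
	struct stat st;
	unsigned char *vec;
	size_t pages, resident = 0, i;
	void *map;
	int fd = open(path, O_RDONLY);

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}
	pages = (st.st_size + page_size - 1) / page_size;
	map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	vec = malloc(pages);
	if (map == MAP_FAILED || !vec || mincore(map, st.st_size, vec) < 0) {
		perror("mmap/mincore");
		return 1;
	}
	for (i = 0; i < pages; i++)
		resident += vec[i] & 1;	/* low bit set => page is in core */
	printf("%zu of %zu pages of %s are resident in the page cache\n",
	       resident, pages, path);
	return 0;
}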

Am I missing something? Can someone please provide some advice?

Thanks a lot
-x
