Hello Guys,

I found an interesting thing on the GFS2 file system: after I did a direct I/O write of a whole file, I still saw some page cache pages attached to the inode. It looks like this GFS2 behavior does not follow file system POSIX semantics. I just want to know whether this is a known issue or something we can fix. By the way, I did the same testing on the EXT4 and OCFS2 file systems, and the results there look OK.

I will paste my testing command lines and outputs below.

For the EXT4 file system:

tb-nd1:/mnt/ext4 # rm -rf f3
tb-nd1:/mnt/ext4 # dd if=/dev/urandom of=./f3 bs=1M count=4 oflag=direct
4+0 records in
4+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0393563 s, 107 MB/s
tb-nd1:/mnt/ext4 # vmtouch -v f3
f3
[ ] 0/1024

           Files: 1
     Directories: 0
  Resident Pages: 0/1024  0/4M  0%
         Elapsed: 0.000424 seconds
tb-nd1:/mnt/ext4 #

For the OCFS2 file system:

tb-nd1:/mnt/ocfs2 # rm -rf f3
tb-nd1:/mnt/ocfs2 # dd if=/dev/urandom of=./f3 bs=1M count=4 oflag=direct
4+0 records in
4+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0592058 s, 70.8 MB/s
tb-nd1:/mnt/ocfs2 # vmtouch -v f3
f3
[ ] 0/1024

           Files: 1
     Directories: 0
  Resident Pages: 0/1024  0/4M  0%
         Elapsed: 0.000226 seconds

For the GFS2 file system:

tb-nd1:/mnt/gfs2 # rm -rf f3
tb-nd1:/mnt/gfs2 # dd if=/dev/urandom of=./f3 bs=1M count=4 oflag=direct
4+0 records in
4+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0579509 s, 72.4 MB/s
tb-nd1:/mnt/gfs2 # vmtouch -v f3
f3
[ oo oOo ] 48/1024

           Files: 1
     Directories: 0
  Resident Pages: 48/1024  192K/4M  4.69%
         Elapsed: 0.000287 seconds

You can download the source code of the vmtouch tool from https://github.com/hoytech/vmtouch

I also added a printk of the inode's address_space after a full-file direct I/O write in kernel space; the nrpages value in the inode's address_space is always greater than zero.

Thanks,
Gang
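
P.S. For anyone who wants to double-check the residency numbers without installing vmtouch: vmtouch essentially mmap()s the file and asks the kernel which of its pages are resident via mincore(2). Below is a minimal self-contained sketch of that check (just an illustration, not vmtouch's actual code; the default file name "f3" simply matches the test above):

/*
 * A vmtouch-style residency check: map the file and ask the kernel,
 * via mincore(2), which of its pages are currently in the page cache.
 */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "f3";
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        long psize = sysconf(_SC_PAGESIZE);
        size_t npages = (st.st_size + psize - 1) / psize;

        /* MAP_SHARED so the mapping refers to the file's page cache pages. */
        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        /* mincore() fills one byte per page; bit 0 is set if the page is resident. */
        unsigned char *vec = malloc(npages);
        if (!vec || mincore(map, st.st_size, vec) < 0) { perror("mincore"); return 1; }

        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
                if (vec[i] & 1)
                        resident++;

        printf("%s: %zu/%zu pages resident\n", path, resident, npages);

        free(vec);
        munmap(map, st.st_size);
        close(fd);
        return 0;
}

Compile it with something like "cc -o resident resident.c" and run "./resident /mnt/gfs2/f3"; it should report the same resident/total page counts that vmtouch shows.

For the kernel-side observation mentioned above, a debug helper along these lines can print nrpages after the write completes (a hypothetical sketch, not the exact patch I used; the helper name and where it is hooked, e.g. at the end of the O_DIRECT write path for the file under test, are up to the tester):

/*
 * Hypothetical debug helper: print how many page cache pages the inode's
 * address_space still holds for the given file.
 */
#include <linux/fs.h>
#include <linux/printk.h>

static void debug_print_nrpages(struct file *file)
{
        struct address_space *mapping = file_inode(file)->i_mapping;

        pr_info("direct-IO debug: %pD: nrpages=%lu\n", file, mapping->nrpages);
}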