On Sun, Jan 15, 2012 at 4:22 PM, Raghavendra D Prabhu <raghu.prabhu13@xxxxxxxxx> wrote:
Hi Zheng,
Interesting analysis.

* On Sun, Jan 15, 2012 at 03:17:12PM -0500, Zheng Da <zhengda1936@xxxxxxxxx> wrote:

From what I have heard, it has supported this for some time. I think you may need to ask on the XFS general mailing list about it.
Thanks. I was reading the kernel 3.0 code; XFS has supported concurrent
direct I/O since kernel 3.1.5. However, concurrent direct I/O writes still
don't perform well in kernel 3.2.
I didn't know about that mailing list. I'll ask there for help.
From what I saw in the xfs_file_dio_aio_write code, it takes the lock EXCL only if the I/O is unaligned, or if there are cached pages to be invalidated after the shared lock is obtained, *but* it demotes that lock to SHARED just before generic_file_direct_write.
I wrote a test program that accesses a 4G file randomly (reads and writes),
and I ran it with 8 threads on a machine with 8 cores. It turns out that only
1 core is busy. I'm pretty sure xfs_rw_ilock is called
with XFS_IOLOCK_SHARED in xfs_file_dio_aio_write.
lockstat shows a lot of wait time on ip->i_lock; it seems
that lock is being taken exclusively.
&(&ip->i_lock)->mr_lock-W:  31568  36170  0.24  20048.25  7589157.99  130154  3146848  0.00  217.70  1238310.72
&(&ip->i_lock)->mr_lock-R:  11251  11886  0.24  20043.01  2895595.18   46671   526309  0.00   63.80   264097.96
                            -------------------------
            &(&ip->i_lock)->mr_lock  36170  [<ffffffffa03be122>] xfs_ilock+0xb2/0x110 [xfs]
            &(&ip->i_lock)->mr_lock  11886  [<ffffffffa03be15a>] xfs_ilock+0xea/0x110 [xfs]
                            -------------------------
            &(&ip->i_lock)->mr_lock  38555  [<ffffffffa03be122>] xfs_ilock+0xb2/0x110 [xfs]
            &(&ip->i_lock)->mr_lock   9501  [<ffffffffa03be15a>] xfs_ilock+0xea/0x110 [xfs]
Then I used SystemTap to instrument xfs_ilock and found at least 3
functions that lock ip->i_lock exclusively during writes.
Actually, there are two locks per inode, i_lock and i_iolock. SystemTap shows that i_iolock is indeed taken SHARED, but i_lock is taken exclusively somewhere else. That said, I don't think I've found the exact spot that hurts concurrency so badly yet.
Thanks,
Da
_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@xxxxxxxxxxxxxxxxx
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies