Hi Kame,

Sorry for the late response, I'm just back from vacation. :)

On Fri, Dec 28, 2012 at 8:39 AM, Kamezawa Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> (2012/12/26 2:22), Sha Zhengju wrote:
>> From: Sha Zhengju <handai.szj@xxxxxxxxxx>
>>
>> Commit a8e7d49a ("Fix race in create_empty_buffers() vs
>> __set_page_dirty_buffers()") extracted TestSetPageDirty from
>> __set_page_dirty, leaving it far away from account_page_dirtied. But
>> it is better to keep the two operations in a single function for
>> modularity. So, to avoid the potential race addressed by commit
>> a8e7d49a, we can hold private_lock until __set_page_dirty completes.
>> I have confirmed that there is no deadlock between ->private_lock
>> and ->tree_lock. This is a preparation patch for the following memcg
>> dirty page accounting patches.
>>
>> Here are some test numbers from before/after this patch:
>> Test steps (Mem: 4G, ext4):
>>   drop_cache; sync
>>   fio (ioengine=sync/write/buffered/bs=4k/size=1g/numjobs=2/group_reporting/thread)
>>
>> We ran the test 10 times and averaged the numbers:
>> Before:
>>   write: io=2048.0MB, bw=254117KB/s, iops=63528.9, runt=8279msec
>>   lat (usec): min=1, max=742361, avg=30.918, stdev=1601.02
>> After:
>>   write: io=2048.0MB, bw=254044KB/s, iops=63510.3, runt=8274.4msec
>>   lat (usec): min=1, max=856333, avg=31.043, stdev=1769.32
>>
>> Note that the impact is small (<1%).
>>
>> Signed-off-by: Sha Zhengju <handai.szj@xxxxxxxxxx>
>> Reviewed-by: Michal Hocko <mhocko@xxxxxxx>
>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>
> Hmm... this change should be double-checked by the vfs and I/O people...
>

Now it seems they haven't paid attention here... I'll push it soon for
more review.

> Doesn't increasing the hold time of mapping->private_lock affect
> performance?
>

Yes, as Fengguang pointed out in the previous round, mapping->private_lock
and mapping->tree_lock are often contended locks; in a dd test case they
showed the #1 and #2 highest contention. So the numbers above try to
measure the impact of lock contention with multiple threads (numjobs=2)
writing to the same file in parallel, and the impact appears to be small
(<1%). I'm not sure whether the test case is sufficient; any advice is
welcome! :)
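
For reference, here is a minimal sketch of what __set_page_dirty_buffers()
in fs/buffer.c would look like with this change applied. It is
reconstructed from the changelog above, so treat it as an illustration of
the locking change rather than the exact patch:

/*
 * Sketch: ->private_lock is now held across both the buffer walk and
 * the TestSetPageDirty()/__set_page_dirty() pair, so setting the dirty
 * flag and doing the dirty accounting can no longer be separated by a
 * racing create_empty_buffers() (the race fixed by commit a8e7d49a).
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/buffer_head.h>

int __set_page_dirty_buffers(struct page *page)
{
	int newly_dirty;
	struct address_space *mapping = page_mapping(page);

	if (unlikely(!mapping))
		return !TestSetPageDirty(page);

	spin_lock(&mapping->private_lock);
	if (page_has_buffers(page)) {
		struct buffer_head *head = page_buffers(page);
		struct buffer_head *bh = head;

		/* Mark every buffer on the page dirty. */
		do {
			set_buffer_dirty(bh);
			bh = bh->b_this_page;
		} while (bh != head);
	}
	/*
	 * __set_page_dirty() takes ->tree_lock (irq-safe) internally, so
	 * the resulting lock order is private_lock -> tree_lock, the
	 * ordering confirmed deadlock-free above.
	 */
	newly_dirty = !TestSetPageDirty(page);
	if (newly_dirty)
		__set_page_dirty(page, mapping, 1);
	spin_unlock(&mapping->private_lock);

	return newly_dirty;
}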