On Fri, Nov 11, 2016 at 2:27 AM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> On Fri, Nov 11, 2016 at 02:11:56AM +0200, Amir Goldstein wrote:
>> Yeah, I wrote in the cover letter that I did not generate performance
>> numbers yet, which is a must for this sort of work, and that I am
>> hoping to get some feedback from testers.
>> But the serialization I am trying to avoid is between copy-ups and
>> whiteouts of different overlay mounts, all on the same fs, which is
>> the case with docker/rocket containers.
>>
>> Not ruling out that I am barking up the wrong tree. The burden of
>> proof is on me.
>
> Surely, the copying of data itself is outside of that lock, isn't it?

Mmmmmm, no it isn't, but I am going to make it right.

> And renames proper, especially if there is any kind of contention going on,
> will be on the metadata hot in cache, so I would really like to see the
> actual evidence of contention-related performance issues...

I am afraid I won't find that evidence.

I ran a small benchmark of 2 parallel rm -rf processes on 2 different
overlay mounts over the same fs. In that test, each process spends 20%
of its time in vfs_whiteout and 10% in vfs_rename (of those whiteouts).
mutex_lock/unlock of s_vfs_rename_mutex takes 4% of the time, but the
lock covers both the whiteout and the rename.

I estimate that after we fix the coarse-grained rename_lock, it will be
quite hard to demonstrate contention-related performance issues due to
s_vfs_rename_mutex.

Thanks for taking the time to set me straight.

Amir.
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
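
[Editor's sketch] The two-mount benchmark described in the mail can be reproduced roughly as below. This is a hedged sketch, not the author's actual test script: the /tmp paths, tree sizes, and helper names are arbitrary assumptions. Mounting overlayfs needs root and kernel support; without it the script falls back to plain directory trees, which still runs two parallel rm -rf jobs on one underlying fs.

```shell
#!/bin/bash
# Sketch: two overlay mounts over the same fs, then two parallel
# rm -rf jobs (one per mount). On a real overlay mount, removing
# lower-layer entries forces a whiteout (and a rename of it) per
# victim, all serialized today under the fs-wide s_vfs_rename_mutex.

BASE=$(mktemp -d)

setup_tree() {  # $1 = mount index (hypothetical helper)
    mkdir -p "$BASE/$1/lower" "$BASE/$1/upper" "$BASE/$1/work" "$BASE/$1/merged"
    # Populate the lower layer so rm -rf on the merged dir has to
    # whiteout every entry rather than just unlink upper files.
    for i in $(seq 1 100); do
        mkdir -p "$BASE/$1/lower/d$i"
        touch "$BASE/$1/lower/d$i/f"
    done
    mount -t overlay overlay \
        -o "lowerdir=$BASE/$1/lower,upperdir=$BASE/$1/upper,workdir=$BASE/$1/work" \
        "$BASE/$1/merged" 2>/dev/null \
        || cp -r "$BASE/$1/lower/." "$BASE/$1/merged"  # unprivileged fallback
}

setup_tree 1
setup_tree 2

# Two rm -rf jobs in parallel, one per mount, both backed by the
# same underlying fs (and hence the same s_vfs_rename_mutex).
time (
    rm -rf "$BASE/1/merged"/* &
    rm -rf "$BASE/2/merged"/* &
    wait
)

umount "$BASE/1/merged" "$BASE/2/merged" 2>/dev/null
rm -rf "$BASE"
```

The per-function time split quoted in the mail (vfs_whiteout, vfs_rename, mutex_lock/unlock) would come from profiling such a run, e.g. with perf record on the two rm processes.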