On Thu, Jan 11, 2018 at 5:59 AM, Chengguang Xu <cgxu519@xxxxxxxxxx> wrote:
>>
>> On Jan 11, 2018, at 11:13 AM, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
>>
>> On Thu, Jan 11, 2018 at 2:53 AM, Chengguang Xu <cgxu519@xxxxxxxxxx> wrote:
>>>>
>> [...]
>>>>>>> +	mutex_lock(&upper_sb->s_sync_lock);
>>>>>>
>>>>>> So it makes some sense to use the upper_sb sync lock
>>>>>> to synchronize with callers of sync(2)/syncfs(2) on the upper fs,
>>>>>> but that could result in syncfs() on the overlay taking much longer
>>>>>> if there is a syncfs() on the upper fs in progress.
>>>>>
>>>>> I have two proposals for this.
>>>>>
>>>>> A.
>>>>> After changing the wait list to global, I think waiting without
>>>>> upper_sb->s_sync_lock is still safe, so we can avoid coupling
>>>>> with syncfs of the upper filesystem. I prefer this.
>>>>>
>>>>> B.
>>>>> If you are worried about solution A, then we can introduce a mutex
>>>>> inside overlayfs for serializing concurrent sync waiting operations.
>>>>>
>>>>
>>>> No need to introduce a new mutex; you should just use sb->s_sync_lock.
>>>> BTW, option A is probably safe w.r.t. races, because according to the
>>>> comment above wait_sb_inodes(), s_sync_lock is meant to improve
>>>> performance, not to avoid races.
>>>
>>> I don't think s_sync_lock here is only for performance. Option A is safe
>>> because we do not manipulate the sb->s_inodes_wb list, but
>>> wait_sb_inodes() does.
>>>
>>
>> See commit e97fedb9ef98 ("sync: serialise per-superblock sync operations").
>> s_sync_lock was added to solve a performance issue.
>
> Commit e97fedb9ef98 is indeed about a performance issue, but see the later
> commit 6c60d2b5746cf23 ("fs/fs-writeback.c: add a new writeback list for
> sync"); that commit switches to a local temporary waiting list instead of
> iterating over all inodes of the filesystem, so now I think it is also
> there to avoid a race condition.
>

Maybe. But it looks to me like s_inode_wblist_lock takes care of the race
conditions.

Thanks,
Amir
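
For reference, below is a condensed sketch of wait_sb_inodes() roughly as it
stands after commit 6c60d2b5746cf23 (paraphrased from fs/fs-writeback.c, not a
verbatim copy of the upstream function). It illustrates the point being argued
above: both sb->s_inodes_wb and the spliced local list are only manipulated
under s_inode_wblist_lock, while s_sync_lock serializes whole sync passes.

	/*
	 * Condensed sketch of fs/fs-writeback.c:wait_sb_inodes(),
	 * paraphrased -- see the upstream source for the exact code.
	 */
	static void wait_sb_inodes(struct super_block *sb)
	{
		LIST_HEAD(sync_list);

		mutex_lock(&sb->s_sync_lock);	/* serialize whole sync passes */

		rcu_read_lock();
		spin_lock_irq(&sb->s_inode_wblist_lock);
		/* take a private snapshot of the inodes currently under writeback */
		list_splice_init(&sb->s_inodes_wb, &sync_list);

		while (!list_empty(&sync_list)) {
			struct inode *inode = list_first_entry(&sync_list,
						struct inode, i_wb_list);
			struct address_space *mapping = inode->i_mapping;

			/*
			 * Move the inode back to s_inodes_wb before dropping the
			 * lock; writeback completion removes it from whichever
			 * list it is on, also under s_inode_wblist_lock.
			 */
			list_move_tail(&inode->i_wb_list, &sb->s_inodes_wb);

			/* still on the list but writeback already finished */
			if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK))
				continue;

			spin_unlock_irq(&sb->s_inode_wblist_lock);

			spin_lock(&inode->i_lock);
			if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
				spin_unlock(&inode->i_lock);
				spin_lock_irq(&sb->s_inode_wblist_lock);
				continue;
			}
			__iget(inode);
			spin_unlock(&inode->i_lock);
			rcu_read_unlock();

			/* wait for writeback, keeping per-mapping error status */
			filemap_fdatawait_keep_errors(mapping);
			cond_resched();
			iput(inode);

			rcu_read_lock();
			spin_lock_irq(&sb->s_inode_wblist_lock);
		}
		spin_unlock_irq(&sb->s_inode_wblist_lock);
		rcu_read_unlock();

		mutex_unlock(&sb->s_sync_lock);
	}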