On Thu 18-11-21 20:02:09, Chengguang Xu wrote:
> ---- On Thursday, 2021-11-18 19:23:15 Jan Kara <jack@xxxxxxx> wrote ----
> > On Thu 18-11-21 14:32:36, Chengguang Xu wrote:
> > >
> > > ---- On Wednesday, 2021-11-17 14:11:29 Chengguang Xu <cgxu519@xxxxxxxxxxxx> wrote ----
> > > > ---- On Tuesday, 2021-11-16 20:35:55 Miklos Szeredi <miklos@xxxxxxxxxx> wrote ----
> > > > > On Tue, 16 Nov 2021 at 03:20, Chengguang Xu <cgxu519@xxxxxxxxxxxx> wrote:
> > > > > >
> > > > > > ---- On Thursday, 2021-10-07 21:34:19 Miklos Szeredi <miklos@xxxxxxxxxx> wrote ----
> > > > > > > On Thu, 7 Oct 2021 at 15:10, Chengguang Xu <cgxu519@xxxxxxxxxxxx> wrote:
> > > > > > > > > However, that wasn't what I was asking about. AFAICS ->write_inode()
> > > > > > > > > won't start writeback for dirty pages. Maybe I'm missing something,
> > > > > > > > > but it looks as if nothing will actually trigger writeback for
> > > > > > > > > dirty pages in the upper inode.
> > > > > > > >
> > > > > > > > Actually, page writeback on the upper inode will be triggered by
> > > > > > > > overlayfs' ->writepages, and overlayfs' ->writepages will be called
> > > > > > > > by the vfs writeback code (i.e. writeback_sb_inodes).
> > > > > > >
> > > > > > > Right.
> > > > > > >
> > > > > > > But wouldn't it be simpler to do this from ->write_inode()?
> > > > > > >
> > > > > > > I.e. call write_inode_now() as suggested by Jan.
> > > > > > >
> > > > > > > We could also just call mark_inode_dirty() on the overlay inode
> > > > > > > regardless of the dirty flags on the upper inode, since it shouldn't
> > > > > > > matter and results in simpler logic.
> > > > > >
> > > > > > Hi Miklos,
> > > > > >
> > > > > > Sorry for the delayed response; I've been busy with another project.
> > > > > >
> > > > > > I agree with your suggestion above, and furthermore, how about just
> > > > > > marking the overlay inode dirty whenever it has an upper inode? That
> > > > > > approach would make marking dirtiness simple enough.
> > > > >
> > > > > Are you suggesting that all non-lower overlay inodes should always be
> > > > > dirty?
> > > > >
> > > > > The logic would be simple, no doubt, but there's the cost of walking
> > > > > those overlay inodes which don't have a dirty upper inode, right?
> > > >
> > > > That's true.
> > > >
> > > > > Can you quantify this cost with a benchmark? It can be totally
> > > > > synthetic, e.g. look up a million upper files without modifying them,
> > > > > then call syncfs.
> > > >
> > > > No problem, I'll run some performance tests.
> > >
> > > Hi Miklos,
> > >
> > > I did some rough tests and the results are below. In practice, I don't
> > > think that 1.3s of extra syncfs time will cause a significant problem.
> > > What do you think?
> >
> > Well, burning 1.3s worth of CPU time for doing nothing seems like quite
> > a bit to me. I understand this is with 1000000 inodes, and although that
> > is quite a few, it is not unheard of. If several containers were calling
> > syncfs(2) on the machine, they could easily hog it... That is why I was
> > originally against keeping overlay inodes always dirty and wanted their
> > dirtiness to at least roughly track the real need to do writeback.
>
> Hi Jan,
>
> Actually, the user and sys times are almost the same as when executing
> syncfs directly on the underlying fs. IMO, it only extends the syncfs(2)
> waiting time for the particular container; it does not burn CPU. What am
> I missing?

Ah, right, I missed that only the real time changed, not the sys time.
Sorry for the confusion. But why did the real time increase so much?
Are we waiting for some IO?

								Honza
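
(For concreteness: the ->write_inode() approach Miklos suggests above might
look roughly like the sketch below. This is only a hypothetical
illustration, not the actual V6 patch, which is not shown in this thread;
it assumes the existing overlayfs helper ovl_inode_upper() and the VFS
helper write_inode_now().)

#include <linux/fs.h>
#include <linux/writeback.h>
#include "overlayfs.h"		/* for ovl_inode_upper() */

/*
 * Hypothetical sketch: called by the VFS when it writes back a dirty
 * overlay inode.  Instead of writing the overlay inode itself (it has
 * no backing store of its own), flush the corresponding upper inode's
 * dirty pages and metadata via write_inode_now(), as suggested earlier
 * in the thread.
 */
static int ovl_write_inode(struct inode *inode,
			   struct writeback_control *wbc)
{
	struct inode *upper = ovl_inode_upper(inode);

	if (!upper)	/* lower-only inode: nothing to write back */
		return 0;

	return write_inode_now(upper, wbc->sync_mode == WB_SYNC_ALL);
}

Hooked up as the .write_inode member of overlayfs' super_operations, this
would let the generic writeback path (writeback_sb_inodes) push upper inode
data whenever the overlay inode is marked dirty.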
> > > Test bed: kvm vm
> > > 2.50GHz cpu, 32 cores
> > > 64GB mem
> > > vm kernel 5.15.0-rc1+ (with ovl syncfs patch V6)
> > >
> > > One million files spread over 2 levels of dir hierarchy.
> > >
> > > Test steps:
> > > 1) create test files in the ovl upper dir
> > > 2) mount overlayfs
> > > 3) execute ls -lR to look up all files in the overlay merge dir
> > > 4) execute slabtop to confirm the overlay inode count
> > > 5) call syncfs on a file in the merge dir
> > >
> > > Tested five times; the results are in the 1.310s ~ 1.326s range.
> > >
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.310s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.326s
> > > user 0m0.001s
> > > sys 0m0.000s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.321s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.316s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-merge/create-file.sh
> > > syncfs success
> > >
> > > real 0m1.314s
> > > user 0m0.001s
> > > sys 0m0.001s
> > >
> > >
> > > Running syncfs directly on a file in the ovl-upper dir:
> > > tested five times; the results are in the 0.001s ~ 0.003s range.
> > >
> > > [root@VM-144-4-centos test]# time ./syncfs a
> > > syncfs success
> > >
> > > real 0m0.002s
> > > user 0m0.001s
> > > sys 0m0.000s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.003s
> > > user 0m0.001s
> > > sys 0m0.000s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> > > [root@VM-144-4-centos test]# time ./syncfs ovl-upper/create-file.sh
> > > syncfs success
> > >
> > > real 0m0.001s
> > > user 0m0.000s
> > > sys 0m0.001s
> >
> > --
> > Jan Kara <jack@xxxxxxxx>
> > SUSE Labs, CR

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
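
(As an aside: the source of the ./syncfs helper used in the timings above
is not included in the thread. A minimal reconstruction might look like
the following; the behaviour, opening the given path and calling syncfs(2)
on the resulting fd, is inferred from the command lines and the
"syncfs success" output, so treat it as an assumption.)

#define _GNU_SOURCE		/* syncfs() is Linux-specific */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <path>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Sync the whole filesystem containing argv[1]. */
	if (syncfs(fd) < 0) {
		perror("syncfs");
		return 1;
	}

	printf("syncfs success\n");
	close(fd);
	return 0;
}

Compiled with e.g. "gcc -o syncfs syncfs.c", "time ./syncfs <file>" then
measures a single syncfs(2) of the filesystem containing <file>, matching
the runs shown above.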