On 2018/11/14 3:21 PM, Nikolay Borisov wrote:
>
>
> On 14.11.18 at 9:17, Qu Wenruo wrote:
>>
>>
>> On 2018/11/14 3:00 PM, Qu Wenruo wrote:
>>>
>>>
>>> On 2018/11/14 2:47 PM, Nikolay Borisov wrote:
>>>>
>>>>
>>>> On 14.11.18 at 2:31, Qu Wenruo wrote:
>>>>> Hi,
>>>>>
>>>>> Is there any (easy) method for a fstests test case to limit the
>>>>> page cache usage?
>>>>>
>>>>> I triggered a btrfs/139 failure with a 2G vRAM VM, and located the
>>>>> root cause of the problem.
>>>>
>>>> You can always size your test VM properly. Otherwise, what about the
>>>> various sysctl tuning knobs? I.e. Documentation/sysctl/vm.txt
>>>> explains some of them: dirty_bytes, dirty_background_bytes,
>>>> dirty_background_ratio, dirty_expire_centisecs.
>>>
>>> Thanks for the hint about vm.txt!
>>>
>>> I just realized we could just use drop_caches to force dirty page
>>> writeback, without the need to tweak the complex memory
>>> pressure/watermark mechanism.
>>
>> Well, this doesn't work as expected.
>>
>> It will cause a transaction commit; it seems the kernel is trying too
>> hard to free page cache.
>>
>> Is there any way to only flush dirty pages of a file?
>
> Well, fdatasync causes ->fsync to run; in btrfs that will be
> btrfs_sync_file.

Yes, that's why we can't use fsync()/fdatasync(), and why I'm trying to
use drop_caches instead.

> One of the first things it does is start_ordered_ops, which does
> btrfs_fdatawrite_range, which in turn calls filemap_fdatawrite_range.

So I'm afraid I have to go with the complex memory pressure/watermark
method.

Thanks,
Qu

>
>
>>
>> Thanks,
>> Qu
>>
>>>
>>> Thanks,
>>> Qu
>>>>
>>>>
>>>> So with a 2G machine the default settings are using only a fraction
>>>> of the RAM. If you adjust the same settings for the larger RAM size
>>>> you should get almost identical behavior.
>>>>
>>>>
>>>>>
>>>>> However it's only really reproducible on a small RAM VM, since that
>>>>> can trigger dirty page writeback due to memory pressure.
>>>>>
>>>>> So I'm wondering if we could do such a thing even for a large RAM
>>>>> test machine.
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>>
>>
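P.S. In case it helps make the idea concrete, below is a rough, untested
sketch of the kind of dirty-threshold tuning I have in mind for the test
case. The threshold values are arbitrary, the helper name is made up,
and a real fstests case would hook the restore into its normal cleanup
path rather than a bare trap:

  # Save the current ratio-based limits so they can be restored later.
  old_bg_ratio=$(cat /proc/sys/vm/dirty_background_ratio)
  old_ratio=$(cat /proc/sys/vm/dirty_ratio)

  restore_dirty_limits()
  {
      # Writing the ratio knobs switches the kernel back to ratio-based
      # limits and clears the *_bytes counterparts (see
      # Documentation/sysctl/vm.txt).
      echo "$old_bg_ratio" > /proc/sys/vm/dirty_background_ratio
      echo "$old_ratio" > /proc/sys/vm/dirty_ratio
  }
  trap restore_dirty_limits EXIT

  # Start background writeback at ~2MB of dirty pages and throttle
  # writers at ~16MB, regardless of how much RAM the machine has, so a
  # large-RAM box behaves roughly like the 2G VM under memory pressure.
  echo $((2 * 1024 * 1024))  > /proc/sys/vm/dirty_background_bytes
  echo $((16 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes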