Hi,

Recently I hit a very weird test failure: btrfs/266 on Aarch64 with 64K page size.

The test case exercises the read-time repair ability of btrfs, and it already supports larger page sizes.

After quite some digging, the root cause is pinned down to the test case itself, mostly related to the behavior of "echo 3 > /proc/sys/vm/drop_caches".

The TL;DR is that, at least on Aarch64 with 64K page size, "echo 3 > /proc/sys/vm/drop_caches" does not ensure all the page cache is dropped, so later operations can still be served from the page cache.

For now I can change the test case to use direct I/O so it never populates the page cache in the first place (rough sketch below).

But considering that this "echo 3 > drop_caches" behavior is relied upon by a lot of test scripts, I'm wondering: is there any guarantee that all non-dirty page cache is dropped before the write to drop_caches returns (i.e., is it synchronous or asynchronous)? And is the behavior platform/page-size specific? I haven't hit the same problem on x86_64 at all.

Thanks,
Qu
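
For reference, here is a rough sketch of the symptom and the workaround I have in mind. This is not the actual btrfs/266 code; it assumes xfs_io from xfsprogs and fincore from util-linux, with /mnt/scratch standing in for the scratch mount:

  FILE=/mnt/scratch/foobar

  # Buffered write, goes through the page cache.
  xfs_io -f -c "pwrite 0 1M" $FILE
  sync

  # Drop clean page cache and reclaimable slab objects.
  echo 3 > /proc/sys/vm/drop_caches

  # On x86_64 this reports no resident pages for the file, but on
  # Aarch64 with 64K page size some pages can still be resident, so a
  # later buffered read can be served from the cache instead of going
  # to the device.
  fincore $FILE

  # Workaround: open with O_DIRECT for both the write and the read so
  # the page cache is never populated at all.
  xfs_io -d -f -c "pwrite 0 1M" $FILE
  xfs_io -d -c "pread 0 1M" $FILE

In the real test the direct I/O change would of course need to fit the existing I/O pattern; the above is only to illustrate the idea.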