On Fri, Jan 8, 2021 at 1:55 PM Taylor Blau <ttaylorr@xxxxxxxxxx> wrote:
>
> On Fri, Jan 08, 2021 at 01:50:34PM -0800, Elijah Newren wrote:
> > On Fri, Jan 8, 2021 at 12:59 PM Taylor Blau <ttaylorr@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Jan 08, 2021 at 12:51:11PM -0800, Elijah Newren wrote:
> > > > Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
> > > > 10 runs for the other two cases):
> > >
> > > Ah, I love hyperfine. In case you don't already have this in your
> > > arsenal, the following `--prepare` step is useful for measuring
> > > cold-cache performance:
> > >
> > >     --prepare='sync; echo 3 | sudo tee /proc/sys/vm/drop_caches'
> >
> > /proc/sys/vm/drop_caches is definitely useful for cold-cache
> > measurements and I've used it in other projects for that purpose. I
> > think cold-cache testing makes sense for various I/O intensive areas
> > such as object lookup, but I ignored it here as I felt the merge code
> > is really about algorithmic performance.
>
> Yes, I agree that the interesting thing here is algorithmic performance
> moreso than I/O.
>
> > So, I instead went the other direction and ensured warm-cache testing
> > by using a warmup run, in order to ensure that I wasn't putting one of
> > the tests at an unfair disadvantage.
>
> I often use it for both. Combining that `--prepare` step with at least
> one `--warmup` invocation is useful to make sure that your I/O cache is
> warmed only with the things it might want to read during your timing
> tests. (Probably one `--warmup` without dumping the cache is fine, since
> you will likely end up evicting things out of your cache that you don't
> care about, but I digress..)

Ah, that hadn't occurred to me, but it makes sense. Thanks for the tip;
I may give it a try at some point.

I worry slightly that it might increase the run-to-run noise instead of
decreasing it, since I'm committing sins by not running the performance
tests on a quiet server but on my laptop with a full GUI running -- a
few-year-old, nearly-bottom-of-the-line Dell refurbished grade B laptop
with spinny disks. Dropping disk caches would lower the risk of needing
to spend time evicting other things from the warm cache, but would
increase the risk that some background GUI thing or system daemon needs
to read from the hard disk when it wouldn't have needed to otherwise,
and if the timing of that disk read is unfortunately placed, then it
could slow down I/O I care about.

I guess there's only one way to find out if it'd help or hurt, though...
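
If I do end up trying it, I gather the combined invocation would look
roughly like the following. (Just a sketch: the branch names, merge
command, and run count below are stand-ins for illustration, not the
actual commands from my timings above.)

    # one untimed warmup run, then ten timed runs;
    # --prepare: sync and drop the page cache before each run
    hyperfine \
        --warmup 1 \
        --runs 10 \
        --prepare='sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' \
        'git reset --hard base && git merge --no-ff -q topic'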