On Fri, Jan 08, 2021 at 01:50:34PM -0800, Elijah Newren wrote:
> On Fri, Jan 8, 2021 at 12:59 PM Taylor Blau <ttaylorr@xxxxxxxxxx> wrote:
> >
> > On Fri, Jan 08, 2021 at 12:51:11PM -0800, Elijah Newren wrote:
> > > Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
> > > 10 runs for the other two cases):
> >
> > Ah, I love hyperfine. In case you don't already have this in your
> > arsenal, the following `--prepare` step is useful for measuring
> > cold-cache performance:
> >
> >     --prepare='sync; echo 3 | sudo tee /proc/sys/vm/drop_caches'
>
> /proc/sys/vm/drop_caches is definitely useful for cold-cache
> measurements and I've used it in other projects for that purpose. I
> think cold-cache testing makes sense for various I/O intensive areas
> such as object lookup, but I ignored it here as I felt the merge code
> is really about algorithmic performance.

Yes, I agree that the interesting thing here is algorithmic performance
more so than I/O.

> So, I instead went the other direction and ensured warm-cache testing
> by using a warmup run, in order to ensure that I wasn't putting one of
> the tests at an unfair disadvantage.

I often use it for both. Combining that `--prepare` step with at least
one `--warmup` invocation is useful to make sure that your I/O cache is
warmed only with the things it might want to read during your timing
tests.

(Probably one `--warmup` without dumping the cache is fine, since you
will likely end up evicting things out of your cache that you don't care
about, but I digress..)

Thanks,
Taylor
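
For illustration, here is a sketch of how the two flags discussed above
might be combined in a single hyperfine invocation. The two git binaries
and the merge arguments are placeholders, not taken from the thread:

```shell
# Hypothetical benchmark comparing two git builds ("git-old"/"git-new"
# are placeholder names). `--prepare` runs its command before each timing
# run; `--warmup 1` performs one untimed run of each command first.
# Dropping the page cache requires root, hence the sudo tee idiom.
hyperfine \
  --warmup 1 \
  --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' \
  'git-old merge topic' \
  'git-new merge topic'
```

Dropping caches in `--prepare` measures cold-cache behavior; omitting it
and keeping only `--warmup` measures warm-cache (algorithmic) behavior.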