On 04/17/2018 02:39 PM, Ben Peart wrote:
> On 4/17/2018 12:34 PM, Jameson Miller wrote:
>> 100K:
>>
>> Test                                      baseline [4]      block_allocation
>> ------------------------------------------------------------------------------
>> 0002.1: read_cache/discard_cache 1 times  0.03(0.01+0.01)   0.02(0.01+0.01) -33.3%
>>
>> 1M:
>>
>> Test                                      baseline          block_allocation
>> ------------------------------------------------------------------------------
>> 0002.1: read_cache/discard_cache 1 times  0.23(0.12+0.11)   0.17(0.07+0.09) -26.1%
>>
>> 2M:
>>
>> Test                                      baseline          block_allocation
>> ------------------------------------------------------------------------------
>> 0002.1: read_cache/discard_cache 1 times  0.45(0.26+0.19)   0.39(0.17+0.20) -13.3%
>> 100K is not a large enough sample size to show the perf impact of this
>> change, but we can see a perf improvement with 1M and 2M entries.
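
(In the tables above, each timing is wall clock time with user+system
CPU time in parentheses; the last column is the relative change in wall
time versus baseline, e.g. (0.02 - 0.03) / 0.03 = -33.3% for the 100K
case.)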
> I see a 33% change with 100K files, which is a substantial improvement
> even in the 100K case. I do see that the actual wall clock savings
> aren't nearly as much with a small repo as they are with the larger
> repos, which makes sense.
You are correct; I should have been more careful in my wording. What I
meant is that the wall time savings with 100K entries are not large in
absolute terms, because this operation is already very fast at that size.
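
To make the mechanism concrete, here is a minimal sketch of the
block-allocation idea (illustrative only; the struct and function names
and the block size are made up, not the actual mem_pool code from this
series). Instead of paying one malloc()/free() per cache entry, entries
are carved out of large blocks, and discarding the cache frees whole
blocks at once:

#include <stdlib.h>

struct entry_block {
        struct entry_block *next;
        size_t used;                  /* bytes already handed out from buf */
        unsigned char buf[64 * 1024]; /* one malloc() serves many entries */
};

struct entry_pool {
        struct entry_block *head;
};

/*
 * Hand out len bytes from the current block, starting a new block when
 * the current one is full. Assumes len is much smaller than the block.
 */
static void *pool_alloc(struct entry_pool *pool, size_t len)
{
        struct entry_block *b = pool->head;

        /* round up so every entry is pointer-aligned */
        len = (len + sizeof(void *) - 1) & ~(sizeof(void *) - 1);

        if (!b || b->used + len > sizeof(b->buf)) {
                b = malloc(sizeof(*b));
                if (!b)
                        return NULL;
                b->next = pool->head;
                b->used = 0;
                pool->head = b;
        }
        b->used += len;
        return b->buf + b->used - len;
}

/*
 * Discard every entry at once by freeing the blocks rather than the
 * individual entries.
 */
static void pool_discard(struct entry_pool *pool)
{
        while (pool->head) {
                struct entry_block *next = pool->head->next;
                free(pool->head);
                pool->head = next;
        }
}

With 1M entries this turns roughly a million malloc()/free() pairs into
a few thousand block operations, which is where the
read_cache/discard_cache savings come from and why the relative win
grows with the number of entries.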