David Laight <David.Laight@xxxxxxxxxx> wrote:

> You could also just not do the copy!
> Although you need (say) asm volatile("\n" ::: "memory") to
> stop it all being completely optimised away.
> That might show up a difference in the 'out_of_line' test
> where 15% on top of the data copies is massive - it may be
> that the data cache behaviour is very different for the
> two cases.

I tried using the following as the load:

	volatile unsigned long foo;

	static __always_inline
	size_t idle_user_iter(void __user *iter_from, size_t progress,
			      size_t len, void *to, void *priv2)
	{
		nop();
		nop();
		foo += (unsigned long)iter_from;
		foo += (unsigned long)len;
		foo += (unsigned long)to + progress;
		nop();
		nop();
		return 0;
	}

	static __always_inline
	size_t idle_kernel_iter(void *iter_from, size_t progress,
				size_t len, void *to, void *priv2)
	{
		nop();
		nop();
		foo += (unsigned long)iter_from;
		foo += (unsigned long)len;
		foo += (unsigned long)to + progress;
		nop();
		nop();
		return 0;
	}

	size_t iov_iter_idle(struct iov_iter *iter, size_t len, void *priv)
	{
		return iterate_and_advance(iter, len, priv,
					   idle_user_iter, idle_kernel_iter);
	}
	EXPORT_SYMBOL(iov_iter_idle);

accumulating various values into a volatile variable to prevent the
optimiser from discarding the calculations (a sketch of the asm-barrier
variant from the quoted suggestion is appended below).  I get:

	iov_kunit_benchmark_bvec: avg 395 uS, stddev 46 uS
	iov_kunit_benchmark_bvec: avg 397 uS, stddev 38 uS
	iov_kunit_benchmark_bvec: avg 411 uS, stddev 57 uS
	iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 5 uS
	iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 6 uS
	iov_kunit_benchmark_bvec_outofline: avg 781 uS, stddev 7 uS
	iov_kunit_benchmark_bvec_split: avg 3599 uS, stddev 737 uS
	iov_kunit_benchmark_bvec_split: avg 3664 uS, stddev 838 uS
	iov_kunit_benchmark_bvec_split: avg 3669 uS, stddev 875 uS
	iov_kunit_benchmark_iovec: avg 472 uS, stddev 17 uS
	iov_kunit_benchmark_iovec: avg 506 uS, stddev 59 uS
	iov_kunit_benchmark_iovec: avg 525 uS, stddev 14 uS
	iov_kunit_benchmark_kvec: avg 421 uS, stddev 73 uS
	iov_kunit_benchmark_kvec: avg 428 uS, stddev 68 uS
	iov_kunit_benchmark_kvec: avg 469 uS, stddev 75 uS
	iov_kunit_benchmark_ubuf: avg 1052 uS, stddev 6 uS
	iov_kunit_benchmark_ubuf: avg 1168 uS, stddev 8 uS
	iov_kunit_benchmark_ubuf: avg 1168 uS, stddev 9 uS
	iov_kunit_benchmark_xarray: avg 680 uS, stddev 11 uS
	iov_kunit_benchmark_xarray: avg 682 uS, stddev 20 uS
	iov_kunit_benchmark_xarray: avg 686 uS, stddev 46 uS
	iov_kunit_benchmark_xarray_outofline: avg 1340 uS, stddev 34 uS
	iov_kunit_benchmark_xarray_outofline: avg 1358 uS, stddev 12 uS
	iov_kunit_benchmark_xarray_outofline: avg 1358 uS, stddev 15 uS

where I made the iovec and kvec tests split their buffers into PAGE_SIZE
segments (also sketched below) and the ubuf test issue an iteration per
PAGE_SIZE'd chunk.  Splitting the kvec buffer into just 8 segments instead
results in the iteration taking <1uS.

The bvec_split test does a kmalloc() per 256 pages inside the loop, which
is why it takes quite a long time.

David
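
For reference, the compiler-barrier approach from the quoted suggestion
might look something like the following - a minimal, untested sketch (only
the kernel-pointer step function is shown; the "r" input constraints are
one plausible way to keep the arguments live, not the exact incantation
from the quoted mail):

	static __always_inline
	size_t idle_kernel_iter(void *iter_from, size_t progress,
				size_t len, void *to, void *priv2)
	{
		/* Empty asm with the arguments as inputs and a "memory"
		 * clobber: the compiler must materialise the values and
		 * cannot delete the step as dead code.
		 */
		asm volatile("" : : "r" (iter_from), "r" (len),
			     "r" (to + progress) : "memory");
		return 0;
	}

Unlike the volatile accumulator, this adds no stores of its own, though
the "memory" clobber still stops the compiler caching values across the
step.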
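
And a minimal sketch of the PAGE_SIZE splitting described above, assuming
a single contiguous buffer (the helper name and the kvec_max parameter are
illustrative, not the actual kunit test code):

	/* Hypothetical helper: carve one buffer into PAGE_SIZE'd kvec
	 * segments; returns the number of segments filled.
	 */
	static size_t split_into_page_kvecs(void *buffer, size_t size,
					    struct kvec *kvec, size_t kvec_max)
	{
		size_t i = 0, offset = 0;

		while (offset < size && i < kvec_max) {
			size_t part = min_t(size_t, size - offset, PAGE_SIZE);

			kvec[i].iov_base = buffer + offset;
			kvec[i].iov_len = part;
			offset += part;
			i++;
		}
		return i;
	}

The iterator would then be built over the array with something like
iov_iter_kvec(&iter, ITER_SOURCE, kvec, n, size) before each timed run.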