On Thu, Apr 30, 2009 at 09:20:58AM -0400, Alan D. Brunelle wrote:
> Hi Andrea -

Hi Alan,

> FYI: I ran a simple test using this code to try and gauge the overhead
> incurred by enabling this technology. Using a single 400GB volume split
> into two 200GB partitions I ran two processes in parallel performing a
> mkfs (ext2) on each partition. First w/out cgroup io-throttle and then
> with it enabled (with each task having throttling enabled to
> 400MB/second (much, much more than the device is actually capable of
> doing)). The idea here is to see the base overhead of just having the
> io-throttle code in the paths.

Interesting. I've never explicitly measured the actual overhead of the
io-throttle infrastructure; I'll add a similar test to the io-throttle
testcase.

> Doing 30 runs of each (w/out & w/ io-throttle enabled) shows very little
> difference (time in seconds)
>
> w/out: min=80.196 avg=80.585 max=81.030 sdev=0.215 spread=0.834
> with:  min=80.402 avg=80.836 max=81.623 sdev=0.327 spread=1.221
>
> So only around 0.3% overhead - and that may not be conclusive with the
> standard deviations seen.

You should see less overhead with reads compared to a pure write
workload, because with reads we don't need to check if the IO request
occurs in a different IO context. And things should be improved with
v16-rc1
(http://download.systemimager.org/~arighi/linux/patches/io-throttle/cgroup-io-throttle-v16-rc1.patch).

So it would also be interesting to analyse the overhead of a read stream
compared to a write stream, as well as a comparison of random
reads/writes. I'll do that in my next benchmarking session.

> --
>
> FYI: The test was run on 2.6.30-rc1+your patches on a 16-way x86_64 box
> (128GB RAM) plus a single FC volume off of a 1Gb FC RAID controller.
>
> Regards,
> Alan D. Brunelle
> Hewlett-Packard

Thanks for posting these results,
-Andrea

_______________________________________________
Containers mailing list
Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/containers
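
For reference, a minimal Python sketch of the kind of timing harness
described in the quoted test (the partition names, the -q flag and the
hard-coded run count are illustrative assumptions, not the exact setup
Alan used):

    #!/usr/bin/env python3
    # Time N runs of two parallel mkfs.ext2 invocations and report
    # min/avg/max/sdev/spread, mirroring the statistics quoted above.
    import statistics
    import subprocess
    import time

    DEVICES = ["/dev/sdX1", "/dev/sdX2"]  # two 200GB partitions (hypothetical names)
    RUNS = 30

    def one_run():
        """Run mkfs.ext2 on both partitions in parallel, return elapsed seconds."""
        start = time.time()
        procs = [subprocess.Popen(["mkfs.ext2", "-q", dev]) for dev in DEVICES]
        for p in procs:
            p.wait()
        return time.time() - start

    if __name__ == "__main__":
        samples = [one_run() for _ in range(RUNS)]
        print("min=%.3f avg=%.3f max=%.3f sdev=%.3f spread=%.3f" % (
            min(samples), statistics.mean(samples), max(samples),
            statistics.stdev(samples), max(samples) - min(samples)))

The same harness would be run once with io-throttle disabled and once
with both tasks throttled well above the device's real capability, so
that only the cost of having the io-throttle code in the IO paths is
measured.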
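
The ~0.3% figure follows directly from the two reported averages; as a
quick check:

    # Relative overhead implied by the averages quoted above.
    avg_without = 80.585
    avg_with = 80.836
    print("overhead = %.2f%%" % ((avg_with - avg_without) / avg_without * 100))
    # prints: overhead = 0.31%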
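
As a rough illustration of the read-stream vs. write-stream comparison
suggested above, a buffered sequential sketch could look like the
following (the file path, sizes and the drop_caches step are
assumptions; a real benchmark would more likely use dd or fio):

    #!/usr/bin/env python3
    # Sequential write stream followed by a sequential read stream of
    # the same file, with the page cache dropped in between so the read
    # actually hits the disk. Requires root for /proc/sys/vm/drop_caches.
    import os
    import time

    PATH = "/mnt/test/stream.dat"  # hypothetical file on the throttled device
    SIZE = 1 << 30                 # 1 GiB total
    BS = 1 << 20                   # 1 MiB blocks

    def drop_caches():
        os.sync()
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")

    def write_stream():
        buf = b"\0" * BS
        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(SIZE // BS):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        return time.time() - start

    def read_stream():
        drop_caches()
        start = time.time()
        with open(PATH, "rb") as f:
            while f.read(BS):
                pass
        return time.time() - start

    if __name__ == "__main__":
        print("write: %.3fs  read: %.3fs" % (write_stream(), read_stream()))

Timing the same pattern with and without io-throttle enabled would give
the per-stream overhead comparison mentioned above; the random
read/write case would need seeks at random offsets (or a tool like fio)
instead of the sequential loop.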