(2012/02/02 19:10), Avi Kivity wrote:
=========================================================
  # of dirty pages: kvm.git (ns), with this patch (ns)
          1:     102,077 ns          10,105 ns
          2:      47,197 ns           9,395 ns
          4:      43,563 ns           9,938 ns
          8:      41,239 ns          10,618 ns
         16:      42,988 ns          12,299 ns
         32:      45,503 ns          14,298 ns
         64:      50,915 ns          19,895 ns
        128:      61,087 ns          29,260 ns
        256:      81,007 ns          49,023 ns
        512:     132,776 ns          86,670 ns
       1024:     939,299 ns         131,496 ns
       2048:     992,209 ns         250,429 ns
       4096:     891,809 ns         479,280 ns
       8192:   1,027,280 ns         906,971 ns

  (until now pretty good)
  (ah, for every 32-bit atomic clear mask ...)

      16384:   1,270,972 ns       6,661,741 ns    // 1 1 1 ... 1
      32768:   1,581,335 ns       9,673,985 ns    // ...
      65536:   2,161,604 ns      11,466,134 ns    // ...
     131072:   3,253,027 ns      13,412,954 ns    // ...
     262144:   5,663,002 ns      16,309,924 ns    // 31 31 31 ... 31
=========================================================

On a 64-bit host, this will be twice as fast. Or if we use cmpxchg16b, and there are no surprises, four times as fast. It will still be slower than the original, but by a smaller margin.
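For readers outside the thread, here is a minimal userspace sketch of the kind of word-at-a-time harvesting these numbers measure; harvest_dirty_bitmap() and its shape are illustrative assumptions, not the actual kvm.git code:

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative sketch only -- not the actual patch. Each bitmap word
 * holding at least one dirty bit costs one atomic read-modify-write,
 * which is why the slope changes once every word is dirty (the
 * "1 1 1 ... 1" cases above).
 */
static size_t harvest_dirty_bitmap(uint64_t *bitmap, size_t nr_pages)
{
	size_t nr_words = (nr_pages + 63) / 64;
	size_t nr_dirty = 0;

	for (size_t i = 0; i < nr_words; i++) {
		uint64_t mask;

		if (!bitmap[i])
			continue;	/* clean word: no atomic op needed */

		/* GCC/clang builtin standing in for the kernel's xchg() */
		mask = __atomic_exchange_n(&bitmap[i], 0, __ATOMIC_SEQ_CST);
		nr_dirty += (size_t)__builtin_popcountll(mask);
		/* ... report each set bit in mask to the caller ... */
	}
	return nr_dirty;
}

Since each word containing a dirty bit costs one atomic read-modify-write, moving from 32-bit to 64-bit words halves the atomic count for the sparse cases above, and a 16-byte cmpxchg16b would roughly quarter it, which is the scaling Avi describes.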
Yes. I used "unsigned int" just because I wanted to use the current atomic_clear_mask() as is. We need to implement atomic_clear_mask_long() or use ...
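No atomic_clear_mask_long() exists in the tree at this point; as a rough sketch of what such a helper might look like, assuming a generic compare-and-swap loop with compiler builtins standing in for the kernel's cmpxchg():

/*
 * Hypothetical sketch of the atomic_clear_mask_long() mentioned above;
 * the signature is an assumption. An arch could instead use a native
 * atomic AND instruction, or (as Avi suggests) cmpxchg16b to clear two
 * longs per atomic operation.
 */
static inline void atomic_clear_mask_long(unsigned long mask,
					  unsigned long *addr)
{
	unsigned long old = __atomic_load_n(addr, __ATOMIC_RELAXED);
	unsigned long new;

	do {
		new = old & ~mask;
		/* on failure the builtin reloads 'old' from *addr */
	} while (!__atomic_compare_exchange_n(addr, &old, new, false,
					      __ATOMIC_SEQ_CST,
					      __ATOMIC_RELAXED));
}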
Yeah. But I think we should switch to srcu-less dirty logs regardless. Here are your numbers, but normalized by the number of dirty pages.
dirty pages    old (ns/page)    new (ns/page)
          1           102077            10105
          2            23599             4698
          4            10891             2485
          8             5155             1327
         16             2687              769
         32             1422              447
         64              796              311
        128              477              229
        256              316              191
        512              259              169
       1024              917              128
       2048              484              122
       4096              218              117
       8192              125              111
      16384               78              407
      32768               48              295
      65536               33              175
     131072               25              102
     262144               22               62

Your worst case, when considering a reasonable number of dirty pages, is 407 ns/page, which is still lower than what userspace will actually do to process the page, so it's reasonable. The old method is often a lot worse than your worst case, by this metric.
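To make the normalization explicit: each entry is just a total from the first table divided by the dirty page count, so the new code's worst per-page case cited above is

	6,661,741 ns / 16,384 pages ~= 407 ns/page

while the old code's worst case by this metric is the single-page one, 102,077 ns / 1 page = 102,077 ns/page.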
Thanks, I can prepare the official patch series then, of course with more tests.

	Takuya