On Thu, 20 May 2010, Linus Torvalds wrote:

> But that's a damn big if. Does it ever trigger in practice? I doubt it. In
> practice, you'll have to fill the pages with something in the first place.
> In practice, the destination of the data is such that you'll often end up
> copying anyway - it won't be /dev/null.
>
> That's why I claim your benchmark is meaningless. It does NOT even say
> what you claim it says. It does not say 1% CPU on a 200MB/s transfer,
> exactly the same way my stupid pipe zero-copy didn't mean that people
> could magically get MB/s throughput with 1% CPU on pipes.

I'm talking about *overhead*, not actual CPU usage. And I know that caches
tend to reduce the effect of multiple copies, but that depends on a lot of
things as well (size of request, delay between copies, etc.).

Generally I've seen pretty significant reductions in overhead from
eliminating each copy. I'm not saying it will always be zero copy all the
way; I'm saying that fewer copies will tend to mean less overhead. And the
same is true for making requests larger.

> It says nothing at all, in short. You need to have a real source, and a
> real destination. Not some empty filesystem and /dev/null destination.

Sure, I will do that. It's just a lot harder to measure the effects on the
hardware I have access to, where the CPU is simply too fast relative to the
I/O speed.

Miklos
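
[Editor's note: for readers following the copy-overhead argument above, here is
a minimal sketch of the two data paths being compared: a plain read()/write()
loop, which bounces every byte through a userspace buffer, versus a splice()-
based transfer through a pipe, which moves pages kernel-side. This is not code
from the thread; the file arguments, 64 KiB chunk size, and error handling are
illustrative assumptions only.]

/* copy_vs_splice.c - illustrative only, not from the thread */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* One extra copy per chunk: kernel -> user buffer -> kernel. */
static ssize_t copy_loop(int in_fd, int out_fd)
{
	char buf[64 * 1024];
	ssize_t n, total = 0;

	while ((n = read(in_fd, buf, sizeof(buf))) > 0) {
		if (write(out_fd, buf, n) != n)
			return -1;
		total += n;
	}
	return n < 0 ? -1 : total;
}

/* No userspace copy: pages are moved through a pipe with splice(). */
static ssize_t splice_loop(int in_fd, int out_fd)
{
	int pipefd[2];
	ssize_t n, total = 0;

	if (pipe(pipefd) < 0)
		return -1;

	for (;;) {
		n = splice(in_fd, NULL, pipefd[1], NULL, 64 * 1024,
			   SPLICE_F_MOVE);
		if (n <= 0)
			break;
		if (splice(pipefd[0], NULL, out_fd, NULL, n,
			   SPLICE_F_MOVE) != n) {
			n = -1;
			break;
		}
		total += n;
	}
	close(pipefd[0]);
	close(pipefd[1]);
	return n < 0 ? -1 : total;
}

int main(int argc, char **argv)
{
	int in_fd, out_fd;

	if (argc != 4) {
		fprintf(stderr, "usage: %s copy|splice <src> <dst>\n", argv[0]);
		return 1;
	}
	in_fd = open(argv[2], O_RDONLY);
	out_fd = open(argv[3], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (in_fd < 0 || out_fd < 0) {
		perror("open");
		return 1;
	}
	if ((argv[1][0] == 's' ? splice_loop : copy_loop)(in_fd, out_fd) < 0) {
		perror("transfer");
		return 1;
	}
	return 0;
}

[Measuring the two modes against a real source and a real destination, rather
than an empty filesystem and /dev/null, is exactly the experiment being asked
for above.]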