On Tue, Feb 8, 2011 at 7:59 PM, Venkateswararao Jujjuri (JV) <jvrao@xxxxxxxxxxxxxxxxxx> wrote:
>>
>>> the implementations it should be really easy to shut me up with
>>> comparison data of zc and non-zc for 1, 64, 128, 256, 512, 1024, 2048,
>>> 4192 byte payloads (without caches enabled of course).
>>
>> I think this is a good experiment; will publish data.
>
> BTW, unless we have a bigger msize with differing pdu sizes, these
> experiments may not make sense.
>

Not sure I agree (at least at the 1-4k scale); we are measuring the
overhead of memcpy versus the overhead of mapping/pinning the
additional sg -- or am I not thinking clearly?  It's possible... I have
not had coffee yet.

I suppose with your small-buffer patch series there might be some
performance differences due to different allocator behavior, and while
I don't think it'll be significant, it may be worth re-doing the
experiment once we have that in place.

Which brings up another question -- I know your team is running
functional regression tests, but is it also running performance
regression tests?  Since we are starting into the optimization patches,
it may not be a bad idea to track how the changes impact scalability,
latency, and throughput (as well as some measure of resource
consumption, though that may be harder to track).

-eric
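
P.S. For concreteness, below is a rough userspace sketch of the kind of
comparison I have in mind: it times the per-payload memcpy that the
non-zc path pays against an mlock/munlock pair as a crude stand-in for
the page pinning the zc path pays.  To be clear, this is only an
approximation cooked up for illustration -- the real zc cost is
get_user_pages() plus sg setup inside the kernel -- so treat it as a
hypothetical sketch, not the actual 9p code path.

/*
 * memcpy vs. pin/unpin microbenchmark sketch (userspace approximation).
 * mlock/munlock is only a proxy for get_user_pages()-style pinning;
 * error returns are ignored for brevity.  Payload sizes match the
 * ones proposed up-thread.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

#define ITERS 100000

static double elapsed_ns(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
	size_t sizes[] = { 1, 64, 128, 256, 512, 1024, 2048, 4192 };
	char *src = malloc(8192), *dst = malloc(8192);
	struct timespec t0, t1;
	size_t i;
	int j;

	memset(src, 'x', 8192);

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		size_t n = sizes[i];
		double copy_ns, pin_ns;

		/* Cost the non-zc path pays: one copy per payload. */
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (j = 0; j < ITERS; j++)
			memcpy(dst, src, n);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		copy_ns = elapsed_ns(t0, t1) / ITERS;

		/* Crude proxy for the pin/unpin the zc path pays. */
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (j = 0; j < ITERS; j++) {
			mlock(src, n);
			munlock(src, n);
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		pin_ns = elapsed_ns(t0, t1) / ITERS;

		printf("%5zu bytes: memcpy %8.1f ns/op  pin/unpin %8.1f ns/op\n",
		       n, copy_ns, pin_ns);
	}

	free(src);
	free(dst);
	return 0;
}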