Re: [V9fs-developer] [RFC] [PATCH 6/7] [net/9p] Read and Write side zerocopy changes for 9P2000.L protocol.

On 2/9/2011 1:18 PM, Eric Van Hensbergen wrote:
> On Wed, Feb 9, 2011 at 3:09 PM, Venkateswararao Jujjuri (JV)
> <jvrao@xxxxxxxxxxxxxxxxxx> wrote:
>> WRITE
>>
>> IO SIZE    TOTAL SIZE    No ZC        ZC
>> 1          1MB           22.4 kb/s    19.8 kb/s
>> 32         32MB          711 kb/s     633 kb/s
>> 64         64MB          1.4 mb/s     1.3 mb/s
>> 128        128MB         2.8 mb/s     2.6 mb/s
>> 256        256MB         5.6 mb/s     5.1 mb/s
>> 512        512MB         10.4 mb/s    10.2 mb/s
>> 1024       1GB           19.7 mb/s    20.4 mb/s
>> 2048       2GB           40.1 mb/s    43.7 mb/s
>> 4096       4GB           71.4 mb/s    73.1 mb/s
>>
>> READ
>> IO SIZE    TOTAL SIZE    No ZC        ZC
>> 1          1MB           26.6 kb/s    23.1 kb/s
>> 32         32MB          783 kb/s     734 kb/s
>> 64         64MB          1.7 mb/s     1.5 mb/s
>> 128        128MB         3.4 mb/s     3.0 mb/s
>> 256        256MB         4.2 mb/s     5.9 mb/s
>> 512        512MB         6.9 mb/s     11.6 mb/s
>> 1024       1GB           23.3 mb/s    23.4 mb/s
>> 2048       2GB           42.5 mb/s    45.4 mb/s
>> 4096       4GB           67.4 mb/s    73.9 mb/s
>>
>> As you can see, the difference is marginal, but zero copy improves as
>> the IO size increases.
>> In the past we have seen tremendous improvements with different msizes,
>> mostly because zero copy makes it possible to ship bigger chunks of data.
>> It could also be my setup/box; even on the host I am getting similar
>> numbers.
>>
> 
> So the break-even point for write is around 512 and for read it is
> somewhere between 128 and 256 -- but I think there may be some
> justification then for not doing zc for payloads of 128 or less.
> Interesting number, it's the same as ERRMAX :)  These numbers will be
> different from system to system of course, but I imagine on a server-class
> machine the tradeoff size moves higher rather than lower (since the
> processor and caches are likely to be faster).  How characteristic is
> the machine you tested it on, JV?

It is an HS21 blade, a two-socket quad-core Xeon with 4 GB of memory, doing
IO to the local disk.
As I said, throughput on the host is also in the same range... we could very
well be capped by the disk performance. But I agree that if the IO size plus
header size is under 4k we can just use the non-zero-copy path.
I don't think it is going to swing the pendulum of performance/complexity
either way, but given that we are going to allocate at least 4k buffers, it
makes sense to use them when everything fits in there.
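
Roughly what I have in mind -- a minimal sketch only, not the actual patch
code. The helper name is made up, and I am assuming P9_IOHDRSZ is the right
constant to count for the read/write header overhead:

#include <linux/types.h>        /* bool */
#include <linux/mm.h>           /* PAGE_SIZE */
#include <net/9p/9p.h>          /* P9_IOHDRSZ */

/*
 * Illustrative helper: fall back to the ordinary copy path whenever the
 * payload plus the 9P read/write header fits in the 4k buffer we allocate
 * anyway, and only take the zero-copy (page-pinning) path for larger
 * requests.
 */
static bool p9_use_zero_copy(size_t count)
{
	return count + P9_IOHDRSZ > PAGE_SIZE;
}

The read/write client paths would call something like this before building
the request, so small IOs never pay the pinning overhead and large IOs still
get the zero-copy benefit.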

- JV
> 
>       -eric



