Re: [EXTERNAL] Re: What throughput is reasonable?

On Wed, 2019-03-27 at 19:02 +0000, Phillips, Tony wrote:
> So after a little digging and documentation reading, I think what's
> happening here is that the VM thinks it has a 10GBE NIC, but in
> reality, the underlying hardware is only 1GBE.  Yes, the host
> hardware has multiple NIC cards, but each VM gets pinned to one of
> them.  (So it's VM-level traffic distribution, not flow-level.)
> 
> My limited understanding of the NetPerf docs says that it should be
> interpreted thusly:
> 
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages                
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
> 
> 212992    1024   10.00     11789372      0    9657.81
> 124928           10.00      203530            166.73
> 
> The first line (begins with 212992) says the SENDER'S socket size
> (that is, the VPN client's) is 212992 bytes.
> The Message size (payload) is 1024 bytes.  Ran for 10 seconds.  It
> attempted to send 11,789,372 messages in 10 seconds, giving a
> throughput of 9657.81 Mbit/sec of ATTEMPTED traffic.
> 
> The second line (begins with 124928) says the RECEIVER'S socket size
> (the netperf target) is 124928 bytes.
> 
> Of the 11,789,372 packets attempted, only 203530 arrived, giving an
> actual network throughput of 166.73 Mbit/sec of UDP traffic.

OK, that makes more sense. Thanks.
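
For reference, I assume the numbers above came from something roughly
like the following invocation -- the host name and options are my guess
at what you ran, so adjust to match:

    # 10-second UDP stream test sending 1024-byte messages; the local and
    # remote socket buffer sizes are what appear as 212992/124928 above.
    netperf -H NetPerfServerHost -t UDP_STREAM -l 10 -- -m 1024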

> I suppose it's important to clarify that the last test case (no VPN)
> is actually a bare-metal server (not a VM). But it's using the same
> network infrastructure.

Hm, is there a way to eliminate that difference? Given that we're still
kind of confused about what's causing this, it would be good to
eliminate all differences, even if they don't *seem* likely to be part
of the problem. 

So looking back at your results, it seems that even 512-byte packets
are slow; so it's not fragmentation of packets that are just too large
for the link, causing every other packet on the wire to be tiny.
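
If you want to rule fragmentation out explicitly anyway, something like
this from the guest should confirm the path MTU -- the target host name
is just a placeholder:

    # 1472 bytes of ICMP payload + 28 bytes of IP/ICMP headers = a
    # 1500-byte packet, sent with the Don't Fragment bit set; it fails
    # if the path MTU is smaller than 1500.
    ping -M do -s 1472 -c 4 NetPerfServerHost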

What next? ... Let's see if the VM is actually managing to send any
more than the ~150Mbit/s that netperf says was successfully received by
the other side. Watch the network statistics on both the VM and host
side (and the egress from the host towards the VPN server)... do they
concur? Are they only seeing ~150Mbit/s coming out of the guest in the
first place?
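
Something along these lines in the guest, on the host, and on the
host's physical uplink should show whether the byte counters agree --
the interface name is only an example:

    # Per-second RX/TX rates for every interface (sysstat package); run
    # it in the guest, on the host's tap/vNIC device, and on the host's
    # uplink while the netperf test is running.
    sar -n DEV 1

    # Or sample the raw counters directly, e.g.:
    ip -s link show dev eth0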

We said we were filling the guest kernel's UDP socket buffer... but we
didn't prove that we were *consistently* filling it. If you watch the
sendq ('netstat -un') while the transfer is happening, does it stay
fairly full or is it fluctuating?
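
Something like this, run while the test is in flight, should make it
obvious -- the Send-Q column is the one to watch:

    # Refresh the UDP socket table every second; a Send-Q that stays
    # close to the 212992-byte socket buffer means the sender is keeping
    # it full.
    watch -n 1 'netstat -un'

    # ss shows the same columns if netstat isn't installed:
    watch -n 1 'ss -un'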

What was the alternative setup that was getting line rate from this
VPN? Was it the same VM host, and guest kernel?


_______________________________________________
openconnect-devel mailing list
openconnect-devel@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/openconnect-devel
