Re: [PATCH v2 2/2] virtio-ring: Allocate indirect buffers from cache when possible

On Wed, Aug 29, 2012 at 05:03:03PM +0200, Sasha Levin wrote:
> On 08/29/2012 01:07 PM, Michael S. Tsirkin wrote:
> > On Tue, Aug 28, 2012 at 03:35:00PM +0200, Sasha Levin wrote:
> >> On 08/28/2012 03:20 PM, Michael S. Tsirkin wrote:
> >>> On Tue, Aug 28, 2012 at 03:04:03PM +0200, Sasha Levin wrote:
> >>>> Currently, if VIRTIO_RING_F_INDIRECT_DESC is enabled, we use
> >>>> indirect descriptors and allocate them with a plain kmalloc().
> >>>>
> >>>> This patch adds a cache so that indirect buffers under a
> >>>> configurable size can be allocated from it instead.
> >>>>
> >>>> Signed-off-by: Sasha Levin <levinsasha928@xxxxxxxxx>
> >>>
> >>> I imagine this helps performance? Any numbers?
> >>
> >> I ran benchmarks on the original RFC; I've re-tested it now and got
> >> numbers similar to the original ones (virtio-net using vhost-net, thresh=16):
> >>
> >> Before:
> >> 	Recv   Send    Send
> >> 	Socket Socket  Message  Elapsed
> >> 	Size   Size    Size     Time     Throughput
> >> 	bytes  bytes   bytes    secs.    10^6bits/sec
> >>
> >> 	 87380  16384  16384    10.00    4512.12
> >>
> >> After:
> >> 	Recv   Send    Send
> >> 	Socket Socket  Message  Elapsed
> >> 	Size   Size    Size     Time     Throughput
> >> 	bytes  bytes   bytes    secs.    10^6bits/sec
> >>
> >> 	 87380  16384  16384    10.00    5399.18
> >>
> >>
> >> Thanks,
> >> Sasha
> > 
> > This is with both patches 1 + 2?
> > Sorry, could you please also test what happens if you apply
> > - just patch 1
> > - just patch 2
> > 
> > Thanks!
> 
> Sure thing!
> 
> I've also re-run it on an IBM server-class host instead of my laptop. Here
> are the results:
> 
> Vanilla kernel:
> 
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
> () port 0 AF_INET
> enable_enobufs failed: getprotobyname
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
>  87380  16384  16384    10.00    7922.72
> 
> Patch 1, with threshold=16:
> 
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
> () port 0 AF_INET
> enable_enobufs failed: getprotobyname
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
>  87380  16384  16384    10.00    8415.07
> 
> Patch 2:
> 
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
> () port 0 AF_INET
> enable_enobufs failed: getprotobyname
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
>  87380  16384  16384    10.00    8931.05
> 
> 
> Note that these are simple tests, with netperf listening on one end and a
> plain 'netperf -H [host]' run inside the guest. If there are other tests
> that may be interesting, please let me know.
> 
> 
> Thanks,
> Sasha


And which parameter did you use for patch 2?
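
To be concrete, by "parameter" I mean the threshold below which patch 2
serves indirect buffers from the slab cache rather than kmalloc(). For
readers following along, a rough, untested sketch of that scheme; the
names here (indirect_thresh, indirect_cache, alloc_indirect,
free_indirect) are illustrative, not necessarily what the patch uses:

	#include <linux/errno.h>
	#include <linux/slab.h>
	#include <linux/virtio_ring.h>

	/*
	 * Sketch only. Indirect descriptor arrays with at most
	 * indirect_thresh entries come from a slab cache sized for that
	 * worst case; larger ones fall back to a plain kmalloc().
	 */
	static unsigned int indirect_thresh = 16;	/* the tunable in question */
	static struct kmem_cache *indirect_cache;

	static int indirect_cache_init(void)
	{
		indirect_cache = kmem_cache_create("vring_indirect",
				indirect_thresh * sizeof(struct vring_desc),
				0, 0, NULL);
		return indirect_cache ? 0 : -ENOMEM;
	}

	static struct vring_desc *alloc_indirect(unsigned int nents, gfp_t gfp)
	{
		if (indirect_cache && nents <= indirect_thresh)
			return kmem_cache_alloc(indirect_cache, gfp);
		return kmalloc(nents * sizeof(struct vring_desc), gfp);
	}

	static void free_indirect(struct vring_desc *desc, unsigned int nents)
	{
		if (indirect_cache && nents <= indirect_thresh)
			kmem_cache_free(indirect_cache, desc);
		else
			kfree(desc);
	}

FWIW, against the vanilla 7922.72 Mbit/s baseline above, patch 1 alone is
roughly a 6% gain and patch 2 alone roughly 13%.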



[Index of Archives]     [KVM Development]     [Libvirt Development]     [Libvirt Users]     [CentOS Virtualization]     [Netdev]     [Ethernet Bridging]     [Linux Wireless]     [Kernel Newbies]     [Security]     [Linux for Hams]     [Netfilter]     [Bugtraq]     [Yosemite Forum]     [MIPS Linux]     [ARM Linux]     [Linux RAID]     [Linux Admin]     [Samba]

  Powered by Linux