Thanks for getting back to me, Mike. I did some tests with OVS 2.0, and the number of TCP connections now goes up to 1,000,000 easily. So if I understand correctly, the issue is not related to vhost_net, or could there still be a link between the two?

http://i.imgur.com/kJJWtOk.png

s.

----- Original Message -----
From: "Mike Dawson" <mike.dawson@xxxxxxxxxxxx>
To: "Sahid Ferdjaoui" <sahid.ferdjaoui@xxxxxxxxxxxxx>, kvm@xxxxxxxxxxxxxxx
Sent: Sunday, October 20, 2013 5:52:36 PM
Subject: Re: virtio: Large number of tcp connections, vhost_net seems to be a bottleneck

On 10/20/2013 4:04 AM, Sahid Ferdjaoui wrote:
> Hi all,
>
> I'm working on creating a large number of TCP connections on a guest.
> The environment is on OpenStack:
>
> Host (dedicated compute node):
> OS/Kernel: Ubuntu/3.2
> CPUs: 24
> Mem: 128 GB
>
> Guest (alone on the host):
> OS/Kernel: Ubuntu/3.2
> CPUs: 4
> Mem: 32 GB
>
> Currently a guest can handle about 700,000 established connections; the CPUs are not loaded and 12 GB of memory are used.
> I'm trying to understand why I can't go higher...
>
> On my host, after several tests with different versions of Open vSwitch and with the Linux bridge,
> it looks like vhost_net is the only process loaded to 100%, and it seems vhost_net cannot use more than one CPU.
>
> I would like to get more information about vhost_net, and to know whether there is a way to configure it to use more than one CPU.

Not sure if it is relevant in your case, but the newly released Open vSwitch 2.0 is now multi-threaded:

http://openvswitch.org/releases/NEWS-2.0.0

@martin_casado said "This is a big deal. Multi-threading provides huge performance benefits on flow setup"

https://twitter.com/martin_casado/status/390384030488616960

If you give it a try, let us know.

- Mike

> Thanks a lot,
> s.
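
For reference on the single-CPU ceiling discussed above: vhost_net handles each virtio-net queue with one kernel worker thread, named "vhost-<pid of the owning QEMU process>", so a guest with a single-queue NIC gets exactly one worker and therefore at most one CPU of vhost work. Multi-queue virtio-net (which needs a newer kernel/QEMU than the 3.2 setup above) is the usual way to get more workers. Below is a minimal sketch, not part of the original thread, that counts the vhost worker threads per guest on the host; run it on the compute node, and a count of one per guest matches the behaviour observed above.

#!/usr/bin/env python3
"""Count vhost_net worker threads per QEMU process on the host.

vhost_net creates one kernel thread per virtio-net queue, named
"vhost-<pid of the owning QEMU process>"; with a single-queue NIC
that is exactly one thread, hence at most one CPU of vhost work.
"""
import os
import re
from collections import Counter


def vhost_threads():
    """Return a Counter mapping each vhost-<pid> thread name to its thread count."""
    counts = Counter()
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except OSError:
            continue  # the process exited while we were scanning
        if re.fullmatch(r"vhost-\d+", comm):
            counts[comm] += 1
    return counts


if __name__ == "__main__":
    for name, n in sorted(vhost_threads().items()):
        print(f"{name}: {n} worker thread(s)")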