On Thu 5/5/2016 4:57 PM, Eli Britstein wrote:
> Does anyone have a direction?

Have you tried to touch all the BAR area before using it? I doubt that
your performance penalty comes from page faults.

Best,
Wei

>
> > -----Original Message-----
> > From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx]
> > On Behalf Of Eli Britstein
> > Sent: Sunday, 17 April, 2016 6:58 PM
> > To: Paolo Bonzini; kvm@xxxxxxxxxxxxxxx
> > Subject: RE: IVSHMEM device performance
> >
> > OK, but if so, my suspicion about the performance penalty is wrong,
> > and I don't have any other thought.
> > Do you?
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@xxxxxxxxxx]
> > > Sent: Sunday, 17 April, 2016 3:04 PM
> > > To: Eli Britstein; kvm@xxxxxxxxxxxxxxx
> > > Subject: Re: IVSHMEM device performance
> > >
> > > On 17/04/2016 09:18, Eli Britstein wrote:
> > > > Attached. Also, if I need to change ioremap to ioremap_cache, please
> > > > advise where. I assume in mmap it is already too late.
> > >
> > > No, you need not do that. If the memory were uncached, the performance
> > > penalty would be 100x or worse.
> > >
> > > Paolo
> [Eli Britstein]
>
> > -----Original Message-----
> > From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx] On
> > Behalf Of Eli Britstein
> > Sent: Monday, April 11, 2016 2:21 PM
> > To: kvm@xxxxxxxxxxxxxxx
> > Subject: IVSHMEM device performance
> >
> > Hi
> >
> > In a VM, I add an IVSHMEM device, on which the MBUFS mempool resides,
> > and also rings I create (I run a DPDK application in the VM).
> > I saw there is a performance penalty if I use such a device, instead of
> > hugepages (the VM's hugepages). My VM's memory is *NOT* backed by the
> > host's hugepages.
> > The memory behind the IVSHMEM device is a host hugepage (I use a
> > patched version of QEMU, as provided by Intel).
> > I thought maybe the reason is that this memory is seen by the VM as a
> > mapped PCI memory region, so it is not cached, but I am not sure.
> > So, my direction was to change the kernel (in the VM) so it will
> > treat this memory as regular memory (and thus cached), instead of a
> > PCI memory region.
> > However, I am not sure my direction is correct, and even if so, I am
> > not sure how/where to change the kernel (my starting point was
> > mm/mmap.c, but I'm not sure it's the correct place to start).
> >
> > Any suggestion is welcome.
> > Thanks,
> > Eli.
>
> Hi Eli,
>
> I think you can try the following two things:
> 1. use ioremap_cache() to map the PCI BAR, instead of ioremap();
> 2. touch the entire BAR memory before using it.
>
> Best,
> Wei
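[Editor's note: Wei's first suggestion, mapping the BAR cacheable with ioremap_cache(), would be a guest-driver change. The fragment below is illustrative only, not a complete or buildable module, and assumes the usual ivshmem layout in which BAR2 is the shared-memory region.]

```c
/* Inside the guest PCI driver's probe path for the ivshmem device.
 * Error handling and request_mem_region() omitted for brevity. */
resource_size_t start = pci_resource_start(pdev, 2);
resource_size_t len   = pci_resource_len(pdev, 2);

/* was: void __iomem *shm = ioremap(start, len);   -- uncached */
void __iomem *shm = ioremap_cache(start, len);     /* cacheable mapping */
```

As Paolo notes earlier in the thread, an uncached mapping would cost far more than the penalty Eli observed, so this change may not be where the time is going.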