RE: IVSHMEM device performance

> -----Original Message-----
> From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx] On
> Behalf Of Eli Britstein
> Sent: Monday, April 11, 2016 2:21 PM
> To: kvm@xxxxxxxxxxxxxxx
> Subject: IVSHMEM device performance
> 
> Hi
> 
> In a VM, I add an IVSHMEM device, on which the MBUFS mempool and the rings
> I create reside (I run a DPDK application in the VM).
> I see a performance penalty when I use such a device instead of hugepages
> (the VM's hugepages). My VM's memory is *NOT* backed by host hugepages,
> but the memory behind the IVSHMEM device is a host hugepage (I use a
> patched version of QEMU, as provided by Intel).
> I suspect the reason is that the VM sees this memory as a mapped PCI memory
> region, so it is not cached, but I am not sure.
> So my plan was to change the guest kernel so that it treats this memory as
> regular (and thus cached) memory instead of a PCI memory region.
> However, I am not sure this direction is correct, and even if it is, I am
> not sure how/where to change the kernel (my starting point was mm/mmap.c,
> but I'm not sure that is the right place to start).
> 
> Any suggestion is welcomed.
> Thanks,
> Eli.

Hi Eli,

I think you can try the following two things:
1. Use ioremap_cache() to map the PCI BAR instead of ioremap();
2. Touch the entire BAR memory before using it.
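Roughly, and only as an untested sketch (it assumes the shared memory sits
behind BAR 2 of the ivshmem device, that a small guest kernel driver does the
mapping rather than the uio path, and the function name here is made up),
the two steps could look like this:

/* Sketch only: cacheable mapping of the ivshmem shared-memory BAR,
 * plus a one-time pre-fault of every page. Error handling and
 * pci_request_region() are omitted for brevity. */
#include <linux/pci.h>
#include <linux/io.h>

static void __iomem *shm;

static int ivshmem_map_shm(struct pci_dev *pdev)
{
	resource_size_t start = pci_resource_start(pdev, 2);
	resource_size_t len   = pci_resource_len(pdev, 2);
	resource_size_t off;
	volatile u8 sink;

	/* (1) Cacheable mapping; plain ioremap() is uncached, which
	 *     would explain the slowdown for mempool/ring data. */
	shm = ioremap_cache(start, len);
	if (!shm)
		return -ENOMEM;

	/* (2) Touch every page once up front so the mapping faults are
	 *     taken here, not in the packet-processing fast path. */
	for (off = 0; off < len; off += PAGE_SIZE)
		sink = readb(shm + off);
	(void)sink;

	return 0;
}

Whether ioremap_cache() really ends up write-back in the guest also depends
on the architecture and on how the host maps the region, so it is worth
measuring rather than assuming.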

Best,
Wei
