Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()

On Tue, 2019-03-12 at 10:59 +0800, Jason Wang wrote:
> > On 2019/3/12 2:14 AM, David Miller wrote:
> > From: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> > Date: Mon, 11 Mar 2019 09:59:28 -0400
> > 
> > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > On 2019/3/8 10:12 PM, Christoph Hellwig wrote:
> > > > > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
> > > > > > This series accesses virtqueue metadata through a kernel
> > > > > > virtual address instead of the copy_user() friends, since
> > > > > > those carry too much overhead: checks, speculation barriers,
> > > > > > or even hardware feature toggling. This is done by setting up
> > > > > > a kernel address through vmap() and registering an MMU
> > > > > > notifier for invalidation.
> > > > > > 
> > > > > > Tests show about a 24% improvement in TX PPS. TCP_STREAM
> > > > > > doesn't see an obvious improvement.
> > > > > 
> > > > > How is this going to work for CPUs with virtually tagged caches?
> > > > 
> > > > Is there anything specific you're worried about?
> > > 
> > > If caches have virtual tags, then the kernel and userspace views
> > > of memory might not be automatically in sync if they access memory
> > > through different virtual addresses. You need to do things like
> > > flush_cache_page(), probably multiple times.
> > 
> > "flush_dcache_page()"
> 
> 
> I get this. Then I think the current set_bit_to_user() is suspicious;
> we probably miss a flush_dcache_page() there:
> 
> 
> static int set_bit_to_user(int nr, void __user *addr)
> {
>          unsigned long log = (unsigned long)addr;
>          struct page *page;
>          void *base;
>          int bit = nr + (log % PAGE_SIZE) * 8;
>          int r;
> 
>          r = get_user_pages_fast(log, 1, 1, &page);
>          if (r < 0)
>                  return r;
>          BUG_ON(r != 1);
>          base = kmap_atomic(page);
>          set_bit(bit, base);
>          kunmap_atomic(base);
>          set_page_dirty_lock(page);
>          put_page(page);
>          return 0;
> }

This sequence should be OK.  get_user_pages() contains a flush which
clears the cache above the user virtual address, so on kmap, the page
is coherent at the new alias.  On parisc at least, kunmap embodies a
flush_dcache_page() which pushes any changes in the cache above the
kernel virtual address back to main memory and makes it coherent again
for the user alias to pick it up.

James



