drm_clflush_pages performance

On Sat, 15 Sep 2012 18:06:03 -0400, Dave Airlie <airlied at gmail.com> wrote:
> On Sat, Sep 15, 2012 at 10:41 AM, hank peng <pengxihan at gmail.com> wrote:
> > I noticed that the drm_clflush_pages function prefers clflush
> > over wbinvd; its code looks like this:
> >
> > void
> > drm_clflush_pages(struct page *pages[], unsigned long num_pages)
> > {
> >
> > #if defined(CONFIG_X86)
> >         if (cpu_has_clflush) {
> >                 drm_cache_flush_clflush(pages, num_pages);
> >                 return;
> >         }
> >
> >         if (on_each_cpu(drm_clflush_ipi_handler, NULL, 1) != 0)
> >                 printk(KERN_ERR "Timed out waiting for cache flush.\n");
> >
> >
> > I think using clflush will be slower than using wbinvd, so I wonder:
> > if I use wbinvd first, what other impact would that have?
> 
> clflush is faster than wbinvd for a lot of use cases.
> 
> There may be a threshold point where it makes sense to use wbinvd, but it
> will affect all processes using the cache, not just the ones using the
> specific pages.

The other factor is that on recent machines the cost of the
smp_call_function() cross-call outweighs the cost of flushing the cache
to memory.
I made the unfortunate mistake of accidentally enabling the wbinvd path
recently...
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
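
For context, the clflush path that Dave refers to walks only the supplied
pages and flushes one cacheline at a time, so its cost scales with the
number of pages passed in rather than with the whole cache. A minimal
sketch along the lines of drm_cache.c of that era (assuming the x86 kernel
helpers clflush(), kmap_atomic()/kunmap_atomic() and
boot_cpu_data.x86_clflush_size):

static void drm_clflush_page(struct page *page)
{
        uint8_t *page_virtual;
        unsigned int i;
        const int size = boot_cpu_data.x86_clflush_size; /* cacheline size */

        if (unlikely(page == NULL))
                return;

        /* Flush only the cachelines backing this page. */
        page_virtual = kmap_atomic(page);
        for (i = 0; i < PAGE_SIZE; i += size)
                clflush(page_virtual + i);
        kunmap_atomic(page_virtual);
}

static void drm_cache_flush_clflush(struct page *pages[],
                                    unsigned long num_pages)
{
        unsigned long i;

        mb();                   /* order against preceding writes */
        for (i = 0; i < num_pages; i++)
                drm_clflush_page(*pages++);
        mb();                   /* ensure the flushes have completed */
}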


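The wbinvd fallback that Chris warns about, by contrast, has to run on
every CPU, which is where the cross-call cost comes in: on_each_cpu()
interrupts all online CPUs and waits for each of them to run the handler,
and each handler invalidates that CPU's entire cache. A sketch of that
handler, again following drm_cache.c:

static void drm_clflush_ipi_handler(void *null)
{
        /* Write back and invalidate this CPU's entire cache hierarchy. */
        wbinvd();
}

So even when only a handful of pages needs flushing, every process on
every CPU pays for a full cache flush plus the IPI round trip, which is
why the clflush loop is preferred whenever the CPU supports it.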