Re: [PATCH v8 0/9] rwsem performance optimizations

On Thu, 2013-10-03 at 09:32 +0200, Ingo Molnar wrote:
> * Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx> wrote:
> 
> > For version 8 of the patchset, we included the patch from Waiman to 
> > streamline wakeup operations and also optimize the MCS lock used in 
> > rwsem and mutex.
> 
> I'd be feeling a lot easier about this patch series if you also had 
> performance figures that show how mmap_sem is affected.
> 
> These:
> 
> > Tim got the following improvement for exim mail server 
> > workload on 40 core system:
> > 
> > Alex+Tim's patchset:    	   +4.8%
> > Alex+Tim+Waiman's patchset:        +5.3%
> 
> appear to be mostly related to the anon_vma->rwsem. But once that lock is 
> changed to an rwlock_t, this measurement falls away.
> 
> Peter Zijlstra suggested the following testcase:
> 
> ===============================>
> In fact, try something like this from userspace:
> 
> n-threads:
> 
>   pthread_mutex_lock(&mutex);
>   foo = mmap();
>   pthread_mutex_unlock(&mutex);
> 
>   /* work */
> 
>   pthread_mutex_lock(&mutex);
>   munmap(foo);
>   pthread_mutex_unlock(&mutex);
> 
> vs
> 
> n-threads:
> 
>   foo = mmap();
>   /* work */
>   munmap(foo);


Ingo,

I ran three kernels: vanilla, one with all the rwsem patches, and one
with all the patches except the optimistic spin one.  I am listing two
presentations of the data.  Please note that there is about 5%
run-to-run variation.
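
A minimal standalone version of Peter's testcase might look like the
sketch below (map size, thread count, and iteration count are arbitrary
placeholders, not necessarily what I ran with; error handling trimmed):

  /* Build: gcc -O2 -pthread mmap_bench.c
   * Run:   ./a.out        -> pure mmap variant
   *        ./a.out mutex  -> mmap/munmap serialized by a pthread mutex
   */
  #include <pthread.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  #define MAP_SIZE (64 * 1024)    /* arbitrary */
  #define NTHREADS 20             /* arbitrary */
  #define ITERS    100000         /* arbitrary */

  static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  static int serialize;           /* 0 = pure mmap, 1 = mutex variant */

  static void *worker(void *arg)
  {
      (void)arg;
      for (long i = 0; i < ITERS; i++) {
          if (serialize)
              pthread_mutex_lock(&mutex);
          char *foo = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (serialize)
              pthread_mutex_unlock(&mutex);
          if (foo == MAP_FAILED)
              exit(1);

          memset(foo, 0, MAP_SIZE);    /* the "work" */

          if (serialize)
              pthread_mutex_lock(&mutex);
          munmap(foo, MAP_SIZE);
          if (serialize)
              pthread_mutex_unlock(&mutex);
      }
      return NULL;
  }

  int main(int argc, char **argv)
  {
      pthread_t tid[NTHREADS];

      serialize = (argc > 1 && !strcmp(argv[1], "mutex"));
      for (int i = 0; i < NTHREADS; i++)
          pthread_create(&tid[i], NULL, worker, NULL);
      for (int i = 0; i < NTHREADS; i++)
          pthread_join(tid[i], NULL);
      return 0;
  }

Each thread maps, touches, and unmaps its own region, so the
cross-thread contention is entirely on mmap_sem (plus the pthread
mutex in the serialized variant).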

% change in performance vs vanilla kernel:

  mmap only
  #threads    all patches    without optspin
     1           +1.9%           +1.6%
     5          +43.8%           +2.6%
    10          +22.7%           -3.0%
    20          -12.0%           -4.5%
    40          -26.9%           -2.0%

  mmap with mutex acquisition
  #threads    all patches    without optspin
     1           -2.1%           -3.0%
     5           -1.9%           +1.0%
    10           +4.2%          +12.5%
    20           -4.1%           +0.6%
    40           -2.8%           -1.9%

The optimistic spin case does very well at low to moderate contention,
but worse under very heavy contention in the pure mmap case.  For the
case with the pthread mutex, there is not much change from the vanilla
kernel.

% change in performance of mmap with pthread-mutex vs pure mmap:

  #threads    vanilla    all patches    without optspin
     1          +3.0%       -1.0%           -1.7%
     5          +7.2%      -26.8%           +5.5%
    10          +5.2%      -10.6%          +22.1%
    20          +6.8%      +16.4%          +12.5%
    40          -0.2%      +32.7%            0.0%

In general, the vanilla and no-optspin cases perform better with the
pthread mutex.  For the optspin case, mmap with the pthread mutex is
worse at low to moderate contention and better at high contention.

Tim

> 
> I've had reports that the former was significantly faster than the
> latter.
> <===============================
> 
> this could be put into a standalone testcase, or you could add it as a new 
> subcommand of 'perf bench', which already has some pthread code, see for 
> example in tools/perf/bench/sched-messaging.c. Adding:
> 
>    perf bench mm threads
> 
> or so would be a natural thing to have.
> 
> Thanks,
> 
> 	Ingo
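
On the 'perf bench' suggestion: schematically, a new subcommand needs
a bench function plus a registration entry in
tools/perf/builtin-bench.c.  The sketch below is illustrative only;
the entry-point name is made up, and the exact table layout should be
copied from an existing benchmark such as sched-messaging:

  /* tools/perf/bench/bench.h: declare the entry point
   * ("bench_mm_threads" is a hypothetical name) */
  int bench_mm_threads(int argc, const char **argv);

  /* tools/perf/builtin-bench.c: register it under a new "mm"
   * collection.  The struct layout here is schematic; match
   * whatever the current tables use. */
  static struct bench mm_benchmarks[] = {
      { "threads", "mmap/munmap scalability over n threads",
        bench_mm_threads },
      { NULL, NULL, NULL }
  };

With that wired up, 'perf bench mm threads' could run the same loop as
the standalone sketch above.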

