Re: [PATCH] mm: do not drain pagevecs for mlock


 



2011/12/30 Tao Ma <tm@xxxxxx>:
> In our testing of mlock, we have found a severe performance regression.
> Further investigation shows that mlock is blocked heavily by
> lru_add_drain_all, which calls schedule_on_each_cpu and flushes the work
> queue; this is very slow when we have several cpus.
>
> So we have tried 2 ways to solve it:
> 1. Add a per-cpu counter for all the pagevecs so that we don't schedule
>    and flush the lru_drain work on a cpu that doesn't have any pagevecs
>    (I have already written this code).
> 2. Remove the lru_add_drain_all.
>
> The first one has a problem: in our production system all the cpus are
> busy, so there is very little chance for a cpu to have 0 pagevecs unless
> you run several consecutive mlocks.
>
> From the commit log that added this call (8891d6da), it seems that we
> don't have to make it. So the 2nd approach seems both easy and workable,
> hence this patch.
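
For reference, the code path described above looks roughly like this in
mm/swap.c of kernels from around that time (a simplified sketch, not the
exact source):

#include <linux/swap.h>
#include <linux/workqueue.h>

static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
	lru_add_drain();	/* flush this CPU's pagevecs onto the LRU lists */
}

int lru_add_drain_all(void)
{
	/*
	 * schedule_on_each_cpu() queues the work on every online CPU and
	 * then waits for each of them to finish, so mlock() ends up
	 * waiting on all CPUs' workqueues even when most pagevecs are
	 * empty.
	 */
	return schedule_on_each_cpu(lru_add_drain_per_cpu);
}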

Could you please show us your system environment and benchmark programs?
Usually the lru_add_drain_*() path is much faster than the mlock() body
itself, because mlock() does plenty of memset(page) work zeroing the pages
it faults in.
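
Something like the following is the kind of micro-benchmark I mean; it is
only a hypothetical sketch (the region size and iteration count are made
up, and you may need to raise RLIMIT_MEMLOCK with "ulimit -l" or run as
root), but it exercises both the page zeroing on fault and the
lru_add_drain_all() call on every mlock():

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>

#define REGION_SIZE	(16UL << 20)	/* 16 MB per iteration, arbitrary */
#define ITERATIONS	64

int main(void)
{
	struct timeval start, end;
	unsigned long i;

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERATIONS; i++) {
		void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/*
		 * mlock() faults in and zeroes every page of the fresh
		 * anonymous region, and also calls lru_add_drain_all()
		 * before walking the VMA.
		 */
		if (mlock(p, REGION_SIZE)) {
			perror("mlock");
			return 1;
		}
		munlock(p, REGION_SIZE);
		munmap(p, REGION_SIZE);
	}
	gettimeofday(&end, NULL);

	printf("%d mlock() calls of %lu MB took %ld us\n",
	       ITERATIONS, REGION_SIZE >> 20,
	       (long)((end.tv_sec - start.tv_sec) * 1000000L +
		      (end.tv_usec - start.tv_usec)));
	return 0;
}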


