On Wed, 18 Jan 2012 11:47:03 +0100 Michal Hocko <mhocko@xxxxxxx> wrote:
> On Wed 18-01-12 09:12:26, KAMEZAWA Hiroyuki wrote:
> > On Tue, 17 Jan 2012 17:46:05 +0100
> > Michal Hocko <mhocko@xxxxxxx> wrote:
> >
> > > On Fri 13-01-12 17:40:19, KAMEZAWA Hiroyuki wrote:
> [...]
> > > > This patch removes PCG_MOVE_LOCK and adds a hashed rwlock array
> > > > instead. This works well enough, even when we need to
> > > > take the lock.
> > >
> > > Hmmm, rwlocks are not very popular these days.
> > > Anyway, can we rather make it a (source) memcg (bit)spinlock instead? We
> > > would reduce false sharing this way and would penalize only pages from
> > > the moving group.
> > >
> > per-memcg spinlock?
>
> Yes
>
> > The reason I used rwlock() is to avoid disabling IRQs. This routine
> > will be called from IRQ context (for dirty ratio support), so IRQs
> > would have to be disabled if we used a spinlock.
>
> OK, I had missed the comment about disabling IRQs. It's true that we do
> not have to worry about deadlocks if the lock is only taken for reading
> from IRQ context, but does the spinlock really become a performance
> bottleneck? We are talking about the slowpath here.
> I could see the reason for the read lock when using hashed locks, because
> they are global, but if we make the lock per-memcg then we shouldn't
> interfere with other updates which are not blocked by the move.

Hm, OK. In the next version, I'll use a per-memcg spinlock (with a hash if
necessary).

Thanks,
-Kame