* Greg Thelen <gthelen@xxxxxxxxxx> [2010-07-27 23:16:54]:

> KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> writes:
>
> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> >
> > This patch replaces page_cgroup's bit_spinlock with a spinlock. In general,
> > spinlock has a better implementation than bit_spin_lock, and we should use
> > it if we have room for it. On a 64-bit arch we have an extra 4 bytes.
> > Let's use them.
> >
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> > --
> > Index: mmotm-0719/include/linux/page_cgroup.h
> > ===================================================================
> > --- mmotm-0719.orig/include/linux/page_cgroup.h
> > +++ mmotm-0719/include/linux/page_cgroup.h
> > @@ -10,8 +10,14 @@
> >   * All page cgroups are allocated at boot or memory hotplug event,
> >   * then the page cgroup for pfn always exists.
> >   */
> > +#ifdef CONFIG_64BIT
> > +#define PCG_HAS_SPINLOCK
> > +#endif
> >  struct page_cgroup {
> >  	unsigned long flags;
> > +#ifdef PCG_HAS_SPINLOCK
> > +	spinlock_t lock;
> > +#endif
> >  	unsigned short mem_cgroup; /* ID of assigned memory cgroup */
> >  	unsigned short blk_cgroup; /* Not Used..but will be. */
> >  	struct page *page;
> > @@ -90,6 +96,16 @@ static inline enum zone_type page_cgroup
> >  	return page_zonenum(pc->page);
> >  }
> >
> > +#ifdef PCG_HAS_SPINLOCK
> > +static inline void lock_page_cgroup(struct page_cgroup *pc)
> > +{
> > +	spin_lock(&pc->lock);
> > +}
>
> This is a minor issue, but this patch breaks usage of PageCgroupLocked().
> An example from __mem_cgroup_move_account() causes a panic:
> 	VM_BUG_ON(!PageCgroupLocked(pc));
>
> I assume that this patch should also delete the following:
> - PCG_LOCK definition from page_cgroup.h
> - TESTPCGFLAG(Locked, LOCK) from page_cgroup.h
> - PCGF_LOCK from memcontrol.c
>

Good catch! But from my understanding of the code, we use spinlock_t only
for 64-bit systems, so we still need the PCG* flags and TESTPCGFLAG for the
32-bit bit_spin_lock path.

> > +static inline void unlock_page_cgroup(struct page_cgroup *pc)
> > +{
> > +	spin_unlock(&pc->lock);
> > +}
> > +#else
> >  static inline void lock_page_cgroup(struct page_cgroup *pc)
> >  {
> >  	bit_spin_lock(PCG_LOCK, &pc->flags);
> > @@ -99,6 +115,7 @@ static inline void unlock_page_cgroup(st
> >  {
> >  	bit_spin_unlock(PCG_LOCK, &pc->flags);
> >  }
> > +#endif
> >
> >  static inline void SetPCGFileFlag(struct page_cgroup *pc, int idx)
> >  {
> > Index: mmotm-0719/mm/page_cgroup.c
> > ===================================================================
> > --- mmotm-0719.orig/mm/page_cgroup.c
> > +++ mmotm-0719/mm/page_cgroup.c
> > @@ -17,6 +17,9 @@ __init_page_cgroup(struct page_cgroup *p
> >  	pc->mem_cgroup = 0;
> >  	pc->page = pfn_to_page(pfn);
> >  	INIT_LIST_HEAD(&pc->lru);
> > +#ifdef PCG_HAS_SPINLOCK
> > +	spin_lock_init(&pc->lock);
> > +#endif
> >  }
> >
> >  static unsigned long total_usage;

--
	Three Cheers,
	Balbir
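
As a concrete illustration of the point above, a helper of the following shape
could keep lock-state assertions such as the VM_BUG_ON() in
__mem_cgroup_move_account() working on both sides of PCG_HAS_SPINLOCK. This is
only a sketch, not part of the posted patch: the name page_cgroup_is_locked()
is invented here, while spin_is_locked(), test_bit(), PCG_LOCK and pc->lock
come from the kernel and the patch quoted above. It would sit next to
lock_page_cgroup() in page_cgroup.h, so no extra includes are needed.

#ifdef PCG_HAS_SPINLOCK
/* 64-bit: lock state is carried by the dedicated spinlock_t. */
static inline int page_cgroup_is_locked(struct page_cgroup *pc)
{
	return spin_is_locked(&pc->lock);
}
#else
/* 32-bit: lock state still lives in the PCG_LOCK bit of pc->flags. */
static inline int page_cgroup_is_locked(struct page_cgroup *pc)
{
	return test_bit(PCG_LOCK, &pc->flags);
}
#endif

/* Callers would then assert, for example:
 *	VM_BUG_ON(!page_cgroup_is_locked(pc));
 */

Such a predicate is only dependable for debug-style assertions: without
CONFIG_SMP or CONFIG_DEBUG_SPINLOCK, spin_is_locked() always reports unlocked,
much as bit_spin_lock() does not set the flag bit on those configurations.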