On Wed, 12 Aug 2015, Mel Gorman wrote:

> There is a seqcounter that protects against spurious allocation failures
> when a task is changing the allowed nodes in a cpuset. There is no need
> to check the seqcounter until a cpuset exists.
>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
> Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---
>  include/linux/cpuset.h | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 1b357997cac5..6eb27cb480b7 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
>   */
>  static inline unsigned int read_mems_allowed_begin(void)
>  {
> +	if (!cpusets_enabled())
> +		return 0;
> +
>  	return read_seqcount_begin(&current->mems_allowed_seq);
>  }
>
> @@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
>   */
>  static inline bool read_mems_allowed_retry(unsigned int seq)
>  {
> +	if (!cpusets_enabled())
> +		return false;
> +
>  	return read_seqcount_retry(&current->mems_allowed_seq, seq);
>  }
>

This patch is an obvious improvement, but I think it's also possible to
change this to be

	if (nr_cpusets() <= 1)
		return false;

and likewise in the existing cpusets_enabled() check in
get_page_from_freelist().  A root cpuset may not exclude mems on the
system, so even if cpusets are mounted there is no need to check for,
or worry about, a concurrent change while only the root cpuset exists.
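
Concretely, an (untested) sketch of that variant, using only the helpers
already named above; it assumes nr_cpusets() counts the root cpuset, so
a value of 1 means no child cpuset exists, and any race with the first
child cpuset being created between the begin and retry calls would still
need thought:

	static inline unsigned int read_mems_allowed_begin(void)
	{
		/*
		 * With only the root cpuset in existence, mems_allowed
		 * cannot be restricted, so skip the seqcount read
		 * entirely.
		 */
		if (nr_cpusets() <= 1)
			return 0;

		return read_seqcount_begin(&current->mems_allowed_seq);
	}

	static inline bool read_mems_allowed_retry(unsigned int seq)
	{
		/* Same reasoning: nothing worth retrying against. */
		if (nr_cpusets() <= 1)
			return false;

		return read_seqcount_retry(&current->mems_allowed_seq, seq);
	}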