On Wed, Mar 3, 2010 at 2:52 AM, Miao Xie <miaox@xxxxxxxxxxxxxx> wrote:
> If MAX_NUMNODES > BITS_PER_LONG, loading/storing task->mems_allowed or mems_allowed in
> task->mempolicy is not an atomic operation, and the kernel page allocator can see an
> empty mems_allowed while task->mems_allowed or mems_allowed in task->mempolicy is being
> updated. So we use a rwlock to protect them to fix this problem.

Rather than adding locks, if the intention is just to avoid the allocator seeing an empty
nodemask, couldn't we instead do the equivalent of:

    current->mems_allowed |= new_mask;
    current->mems_allowed = new_mask;

i.e. effectively set all the new bits in the nodemask first, and then clear the old bits
that are no longer in the new mask.

The only downside of this is that a page allocation that races with the update could
potentially allocate from any node in the union of the old and new nodemasks - but that's
the case anyway for an allocation that races with an update, so I don't see that it's any
worse.

Paul
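
To make the two-phase update concrete, here is a minimal, self-contained sketch of the
idea in plain C. The struct and function names are hypothetical, not kernel code; in the
kernel this would operate on nodemask_t (roughly a nodes_or() followed by a copy of the
new mask into task->mems_allowed):

    #include <stdio.h>

    #define MASK_WORDS 2                      /* stand-in for a multi-word nodemask,
                                                 i.e. MAX_NUMNODES > BITS_PER_LONG */

    struct mask { unsigned long bits[MASK_WORDS]; };

    /*
     * Two-phase update of the "allowed" mask:
     *  1. OR in the new bits, so the mask only grows in this step and a
     *     concurrent reader can never observe it empty;
     *  2. copy the new mask over, clearing the old bits that are no longer set.
     * A reader racing with the update may see the union of the old and new
     * masks, but never an empty mask.
     */
    static void update_allowed(struct mask *cur, const struct mask *new)
    {
        int i;

        for (i = 0; i < MASK_WORDS; i++)      /* phase 1: widen */
            cur->bits[i] |= new->bits[i];

        for (i = 0; i < MASK_WORDS; i++)      /* phase 2: narrow */
            cur->bits[i] = new->bits[i];
    }

    int main(void)
    {
        struct mask cur = { { 0x1UL, 0x0UL } };   /* old mask: a node in word 0 */
        struct mask new = { { 0x0UL, 0x1UL } };   /* new mask: a node in word 1 */

        update_allowed(&cur, &new);
        printf("%lx %lx\n", cur.bits[0], cur.bits[1]);   /* prints: 0 1 */
        return 0;
    }

As the mail says, the cost of this approach is only that a racing allocation may pick a
node from the union of the old and new masks, which is already possible for any
allocation that races with an update.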