On Mon, 22 Jun 2009 08:38:50 -0700 Yinghai Lu <yinghai@xxxxxxxxxx> wrote:

> Nathan reported that
> | commit 73d60b7f747176dbdff826c4127d22e1fd3f9f74
> | Author: Yinghai Lu <yinghai@xxxxxxxxxx>
> | Date: Tue Jun 16 15:33:00 2009 -0700
> |
> | page-allocator: clear N_HIGH_MEMORY map before we set it again
> |
> | SRAT tables may contains nodes of very small size. The arch code may
> | decide to not activate such a node. However, currently the early boot
> | code sets N_HIGH_MEMORY for such nodes. These nodes therefore seem to be
> | active although these nodes have no present pages.
> |
> | For 64bit N_HIGH_MEMORY == N_NORMAL_MEMORY, so that works for 64 bit too
>
> the cpuset.mems cgroup attribute on an i386 kvm guest
>
> fix it by only clearing node_states[N_NORMAL_MEMORY] for 64bit only.
> and need to do save/restore for that in find_zone_movable_pfn

There appear to be some words omitted from this changelog - it doesn't
make sense.

I think that perhaps a line got deleted before "the cpuset.mems cgroup
...".  That was the line which actually describes the bug which we're
fixing.

Or perhaps it was a single word?  "zeroes".

I did this:

: Nathan reported that
:
: | commit 73d60b7f747176dbdff826c4127d22e1fd3f9f74
: | Author: Yinghai Lu <yinghai@xxxxxxxxxx>
: | Date: Tue Jun 16 15:33:00 2009 -0700
: |
: | page-allocator: clear N_HIGH_MEMORY map before we set it again
: |
: | SRAT tables may contains nodes of very small size. The arch code may
: | decide to not activate such a node. However, currently the early boot
: | code sets N_HIGH_MEMORY for such nodes. These nodes therefore seem to be
: | active although these nodes have no present pages.
: |
: | For 64bit N_HIGH_MEMORY == N_NORMAL_MEMORY, so that works for 64 bit too
:
: unintentionally and incorrectly clears the cpuset.mems cgroup attribute on
: an i386 kvm guest
:
: Fix this by only clearing node_states[N_NORMAL_MEMORY] for 64bit only.
: and need to do save/restore for that in find_zone_movable_pfn

Please check whether that is correct.  If not, how should it be changed?
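
For reference, I read the "save/restore" in that last line as: save
node_states[N_HIGH_MEMORY] before it gets consumed as scratch space and
put it back afterwards, presumably in find_zone_movable_pfns_for_nodes()
in mm/page_alloc.c.  Something along these lines - a sketch of the idea
only, not the actual patch, and "saved_node_state" is just an
illustrative name:

	static void __init find_zone_movable_pfns_for_nodes(unsigned long *movable_pfn)
	{
		/*
		 * Save the map up front: the (elided) kernelcore/movablecore
		 * distribution below uses node_states[N_HIGH_MEMORY] as
		 * scratch space, clearing nodes as it finishes with them.
		 * ("saved_node_state" is an illustrative name.)
		 */
		nodemask_t saved_node_state = node_states[N_HIGH_MEMORY];
		int usable_nodes = nodes_weight(node_states[N_HIGH_MEMORY]);

		/* ... existing spread of kernelcore across usable_nodes ... */

		/*
		 * Restore what was consumed above, otherwise a 32-bit kernel
		 * is left with an empty N_HIGH_MEMORY map and things like
		 * cpuset.mems come out wrong.
		 */
		node_states[N_HIGH_MEMORY] = saved_node_state;
	}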