Finally, I could isolate the issue further. I tried the following kernels and hardware; the issue is visible only with SLES 11 on IBM hardware:

1. SLES 11 + IBM HW       --> issue is visible
2. SLES 11 + HP, Sun HW   --> issue is not visible
3. 2.6.32 vanilla + any HW --> issue is not visible
4. 2.6.36 vanilla + any HW --> issue is not visible

HP has the same hardware as IBM (both Nehalem). Sun is a somewhat older Opteron.

Any thoughts?

__
Tharindu R Bamunuarachchi.

On Thu, Oct 28, 2010 at 3:06 AM, Cliff Wickman <cpw@xxxxxxx> wrote:
> Hi Tharindu,
>
> On Tue, Oct 26, 2010 at 09:57:53PM +0530, Tharindu Rukshan Bamunuarachchi wrote:
>> Dear All,
>>
>> Today, we experienced abnormal memory allocation behavior.
>> I do not know whether this is the expected behavior or due to misconfiguration.
>>
>> I have a two-node NUMA system and a 100G tmpfs mount.
>>
>> 1. When "dd" runs freely (without CPU affinity), all memory pages
>>    are allocated from node 0 first and then from node 1.
>>
>> 2. When "dd" runs bound (using taskset) to a CPU core in node 1:
>>    all memory pages start to be allocated from node 1,
>>    BUT the machine stops responding after exhausting node 1.
>>    No memory pages are ever allocated from node 0.
>>
>> Why can't "dd" allocate memory from node 0 when it is bound
>> to a node 1 CPU core?
>>
>> Please help.
>> I am using SLES 11 with the 2.6.27 kernel.
>
> I'm no expert on the taskset command, but from what I can see, it
> just uses sched_setaffinity() to set CPU affinity.  I don't see any
> set_mempolicy() calls to affect memory affinity, so I see no reason
> for it to restrict memory allocation.
> You're not using some other placement mechanism in conjunction with
> taskset, are you?  A cpuset, for example?
>
> -Cliff
> --
> Cliff Wickman
> SGI
> cpw@xxxxxxx
> (651) 683-3824

--
To unsubscribe from this list: send the line "unsubscribe linux-numa" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
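
[A quick way to test Cliff's cpuset theory: taskset/sched_setaffinity() only narrows a process's Cpus_allowed mask, while a cpuset also narrows Mems_allowed, which does restrict page allocation to a subset of nodes. Comparing the two fields in /proc/self/status shows which situation you are in. A minimal sketch, assuming Linux's /proc status format; the `read_allowed` helper name is ours, not a kernel interface:]

```python
import os

# Sketch: distinguish taskset (CPU-only) from a cpuset (CPU + memory).
# taskset narrows only Cpus_allowed; a cpuset also narrows Mems_allowed,
# which limits which NUMA nodes the kernel will allocate pages from.
# Assumes the Linux /proc/<pid>/status format (see proc(5)).

def read_allowed(field, path="/proc/self/status"):
    """Return the value of a status field such as 'Cpus_allowed_list',
    or None if the field is not present."""
    with open(path) as f:
        for line in f:
            if line.startswith(field + ":"):
                return line.split(":", 1)[1].strip()
    return None

if __name__ == "__main__" and os.path.exists("/proc/self/status"):
    print("Cpus_allowed_list:", read_allowed("Cpus_allowed_list"))
    print("Mems_allowed_list:", read_allowed("Mems_allowed_list"))
    # Under plain taskset, Mems_allowed_list should still span all
    # nodes (e.g. "0-1" on a two-node box); a cpuset would shrink it.
```

[If Mems_allowed_list comes back as "1" for the bound dd, some placement mechanism beyond taskset is in play.]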