Hi,

I am trying to allocate memory on my NUMA-capable AMD Opteron machine using numa_alloc_onnode(), and I am getting a "memory not available" error (errno 12, ENOMEM) even though the nodes have far more memory than what is being allocated.

I am allocating a simple structure:

    typedef struct {
        int   N;
        int   nz;
        float *value;   /* scalar values */
        int   *index;   /* offsets */
    } sparseVector;

in a for loop like this:

    void *ret;
    for (long int i = 0; i < 100000000; i++) {
        ret = numa_alloc_onnode(sizeof(sparseVector), 0);
        if (ret == NULL) {
            fprintf(stderr,
                    "Error during memory allocation. errno: %d - %s\n"
                    "Memory allocated: %ld bytes\n",
                    errno, strerror(errno), i * (long)sizeof(sparseVector));
            exit(0);
        }
    }

The allocation always fails once about 1572264 bytes have been allocated. But the machine has 60 GB of RAM, so each node has roughly 20 GB, which should be far more than enough for what I need. Also, if I substitute numa_alloc_onnode() with malloc(), the same loop allocates memory without any problems.

Please let me know what could be the cause of this problem. Is there some kind of Linux parameter that limits the total memory that can be allocated this way? I have been battling with this for a month now, so any suggestions would be much appreciated.

Ajay.
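
P.S. In case it helps, here is a minimal standalone version of the test, put together from the pieces above. This is only a sketch of what I am running: it assumes libnuma is installed and is compiled with "gcc test.c -lnuma"; node 0 and the loop count are just the values I happened to use.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <numa.h>

    typedef struct {
        int   N;
        int   nz;
        float *value;   /* scalar values */
        int   *index;   /* offsets */
    } sparseVector;

    int main(void)
    {
        /* Check that the kernel and libnuma actually support NUMA here. */
        if (numa_available() == -1) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return 1;
        }

        for (long int i = 0; i < 100000000; i++) {
            /* Ask for one sparseVector worth of memory on node 0. */
            void *ret = numa_alloc_onnode(sizeof(sparseVector), 0);
            if (ret == NULL) {
                fprintf(stderr,
                        "Allocation failed at iteration %ld (about %ld bytes "
                        "allocated so far): errno %d - %s\n",
                        i, i * (long)sizeof(sparseVector),
                        errno, strerror(errno));
                return 1;
            }
        }

        printf("All allocations succeeded\n");
        return 0;
    }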