Hi Bill,

Sorry for the late response. Here is the output for the three commands you asked for:

----- numastat -cm

Per-node system memory usage (in MBs):
                 Node 1  Total
                 ------  -----
MemTotal          12006  12006
MemFree            2628   2628
MemUsed            9379   9379
Active              523    523
Inactive            284    284
Active(anon)        249    249
Inactive(anon)       17     17
Active(file)        274    274
Inactive(file)      267    267
Unevictable           0      0
Mlocked               0      0
Dirty                 0      0
Writeback             0      0
FilePages           559    559
Mapped              165    165
AnonPages           248    248
Shmem                18     18
KernelStack           7      7
PageTables           21     21
NFS_Unstable          0      0
Bounce                0      0
WritebackTmp          0      0
Slab                145    145
SReclaimable        105    105
SUnreclaim           40     40
AnonHugePages         0      0
HugePages_Total    8192   8192
HugePages_Free        0      0
HugePages_Surp        0      0

--- lscpu

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 26
Model name:            Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
Stepping:              5
CPU MHz:               1862.000
CPU max MHz:           2927.0000
CPU min MHz:           1596.0000
BogoMIPS:              5865.83
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node1 CPU(s):     0-15
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida dtherm tpr_shadow vnmi flexpriority ept vpid

--- numactl --hardware

available: 1 nodes (1)
node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 1 size: 12006 MB
node 1 free: 2612 MB
node distances:
node   1
  1:  10

Thanks,
Kevin

On Mon, Mar 28, 2016 at 11:55 PM, Bill Gray <bgray@xxxxxxxxxx> wrote:
> Hi Kevin,
>
> Could you please send me the output from "numastat -cm", "lscpu" and also
> from "numactl --hardware"? Thanks!
>
> -- Bill
>
>
> On Sun, Mar 27, 2016 at 10:30 AM, Kevin Wilson <wkevils@xxxxxxxxx> wrote:
>>
>> Hello, Linux NUMA experts,
>>
>> I have a server running F23, 64-bit.
>> When I run numastat, I see only node 1.
>>
>> On two other machines I see node 0 and not node 1 (and in fact I think
>> the same is true of all the machines I have had access to in the past,
>> which is quite a large number).
>>
>> What is the reason for this? Is it configurable somehow (BIOS, kernel
>> command line, etc.)? Could it be a problem with the RAM/hardware on
>> that specific machine?
>>
>> Regards,
>> Kevin

--
To unsubscribe from this list: send the line "unsubscribe linux-numa" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
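
The node numbering that numastat and numactl report comes from what the kernel exposes under sysfs, which in turn follows the firmware's ACPI SRAT proximity-domain numbering. A minimal way to cross-check it directly, assuming sysfs is mounted at /sys as on a stock Fedora install, is:

  # List the NUMA node directories the kernel has created
  ls /sys/devices/system/node/ | grep '^node'

  # Show which CPUs and how much memory each exposed node has
  cat /sys/devices/system/node/node*/cpulist
  grep MemTotal /sys/devices/system/node/node*/meminfo

  # The boot log usually records what the firmware (ACPI SRAT) reported
  dmesg | grep -i -e srat -e numa

If only a node1 directory appears here, the kernel itself sees a single node numbered 1, so the numbering originates in the firmware tables rather than in numastat or numactl.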