Hi,

From: "Bert Wesarg" <bert.wesarg@xxxxxxxxxxxxxx>
Subject: Re: [RESEND][RFC] lscpu - CPU architecture information helper
Date: Wed, 9 Jul 2008 09:07:08 +0200

> Hi,
>
> On Wed, Jul 9, 2008 at 05:50, Cai Qian <qcai@xxxxxxxxxx> wrote:
> > From: "Bert Wesarg" <bert.wesarg@xxxxxxxxxxxxxx>
> > Subject: Re: [RESEND][RFC] lscpu - CPU architecture information helper
> > Date: Tue, 8 Jul 2008 18:04:21 +0200
> >
> >> > /* Convert hexadecimal number from a mapping file to decimal. */
> >> > double
> >> > decimal (char *file)
> >> So you extend your supported number of cpus from 32 to 52, great. You
> >> should either look inside the kernel code, the kernel documentation, or
> >> into libbitmask(3) from Paul Jackson/SGI [1].
> >>
> >
> > Hmm, it has been tested successfully on a 64-CPU SGI machine though,
> but sibling(decimal(<file with "0xffffffffffffffff" as content>)) is 1,
> not 64. But you can easily solve this, because you only use this
> pattern 'sibling (decimal (<file>))' and effectively count only the
> set bits:
>
> /* count the set bits in a cpumap file */
> int
> sibling (char *file)
> {
>   int c;
>   int result = 0;
>   FILE *fp;
>
>   fp = fopen (file, "r");
>   if (fp == NULL)
>     err (1, "fopen %s", file);
>
>   while ((c = fgetc (fp)) != EOF)
>     {
>       if (isxdigit (c))
>         result += <number of bits in char 'c'>;
>     }
>
>   fclose (fp);
>
>   return result;
> }
>
> and then replace all 'sibling (decimal (buf))' with 'sibling (buf)'.
>
> >> Again, no support for holes in the cpu range. But I'm currently unsure
> >> if holes are actually possible. What about using readdir and checking
> >> for a cpu%d pattern in the dir name with sscanf?
> >>
> >
> > There should not be a hole. Otherwise, it would be a kernel bug? I have
> > tried to offline a CPU, but it still has an entry there.
> That should be true. I have mostly dealt with online cpus, but this
> doesn't fit here.
>
> >> What about nodes >0? node0/cpumap contains only the cpus from node0;
> >> you should also check the cpumaps from the other nodes.
> >>
> >
> > So there is a possibility that different nodes have different numbers
> > of CPUs? If so, that looks too complicated to handle in this program,
> > and I'll probably remove the NUMA information altogether.
> Sure, just remove one cpu from a node.
>
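To make the bit-counting idea above concrete, here is a minimal,
self-contained sketch that fills in the placeholder. It is only an
illustration: the helper name 'hexbits' is made up, and it assumes
err(3) is available and that a sysfs cpumap file contains nothing but
hex digits, commas and whitespace.

/* count the set bits in a cpumap file */
#include <ctype.h>
#include <err.h>
#include <stdio.h>

/* number of set bits in one hex digit (helper name is illustrative) */
static int
hexbits (int c)
{
  int n = isdigit (c) ? c - '0' : tolower (c) - 'a' + 10;
  int bits = 0;

  for (; n; n >>= 1)
    bits += n & 1;
  return bits;
}

int
sibling (char *file)
{
  int c;
  int result = 0;
  FILE *fp;

  fp = fopen (file, "r");
  if (fp == NULL)
    err (1, "fopen %s", file);

  while ((c = fgetc (fp)) != EOF)
    if (isxdigit (c))
      result += hexbits (c);

  fclose (fp);

  return result;
}

Feeding it a sysfs cpumap file should then give the number of set bits
directly, without going through a double.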
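Similarly, a rough sketch of the readdir()/sscanf() idea from the quote
above, so holes in the cpu numbering would not matter. The directory
path and the function name 'count_cpus' are assumptions for the
example, not lscpu's actual code.

/* count cpu<N> entries under sysfs, tolerating holes in the numbering */
#include <dirent.h>
#include <err.h>
#include <stdio.h>

#define SYS_CPU_DIR "/sys/devices/system/cpu"

int
count_cpus (void)
{
  DIR *dir;
  struct dirent *d;
  int num, count = 0;
  char extra;

  dir = opendir (SYS_CPU_DIR);
  if (dir == NULL)
    err (1, "opendir %s", SYS_CPU_DIR);

  while ((d = readdir (dir)) != NULL)
    /* match "cpu0", "cpu12", ... but not "cpufreq" or "cpuidle" */
    if (sscanf (d->d_name, "cpu%d%c", &num, &extra) == 1)
      count++;

  closedir (dir);
  return count;
}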
OK, the program reads through every node directory. Tested on one of
the SGI Altix boxes (node1 has all CPUs removed):

$ /usr/bin/lscpu -p
#The following is the parsable format, which can be fed to other
#programs. Each different item in every column has a unique ID
#starting from zero.
#
#CPU,Core,Socket,Node,L1d,L1i,L2,L3
0,0,0,0,0,0,0,0
1,1,1,0,1,1,1,1
2,2,2,2,2,2,2,2
3,3,3,2,3,3,3,3
4,4,4,3,4,4,4,4
5,5,5,3,5,5,5,5
6,6,6,4,6,6,6,6
7,7,7,4,7,7,7,7
8,8,8,5,8,8,8,8
9,9,9,5,9,9,9,9
10,10,10,6,10,10,10,10
11,11,11,6,11,11,11,11
12,12,12,7,12,12,12,12
13,13,13,7,13,13,13,13
14,14,14,8,14,14,14,14
15,15,15,8,15,15,15,15
16,16,16,9,16,16,16,16
17,17,17,9,17,17,17,17
18,18,18,10,18,18,18,18
19,19,19,10,19,19,19,19
20,20,20,11,20,20,20,20
21,21,21,11,21,21,21,21
22,22,22,12,22,22,22,22
23,23,23,12,23,23,23,23
24,24,24,13,24,24,24,24
25,25,25,13,25,25,25,25
26,26,26,14,26,26,26,26
27,27,27,14,27,27,27,27
28,28,28,15,28,28,28,28
29,29,29,15,29,29,29,29
30,30,30,16,30,30,30,30
31,31,31,16,31,31,31,31
32,32,32,17,32,32,32,32
33,33,33,17,33,33,33,33
34,34,34,18,34,34,34,34
35,35,35,18,35,35,35,35
36,36,36,19,36,36,36,36
37,37,37,19,37,37,37,37
38,38,38,20,38,38,38,38
39,39,39,20,39,39,39,39
40,40,40,21,40,40,40,40
41,41,41,21,41,41,41,41
42,42,42,22,42,42,42,42
43,43,43,22,43,43,43,43
44,44,44,23,44,44,44,44
45,45,45,23,45,45,45,45
46,46,46,24,46,46,46,46
47,47,47,24,47,47,47,47
48,48,48,25,48,48,48,48
49,49,49,25,49,49,49,49
50,50,50,26,50,50,50,50
51,51,51,26,51,51,51,51
52,52,52,27,52,52,52,52
53,53,53,27,53,53,53,53
54,54,54,28,54,54,54,54
55,55,55,28,55,55,55,55
56,56,56,29,56,56,56,56
57,57,57,29,57,57,57,57
58,58,58,30,58,58,58,58
59,59,59,30,59,59,59,59
60,60,60,31,60,60,60,60
61,61,61,31,61,61,61,61

CaiQian

> Bert
> >
> > CaiQian
--
To unsubscribe from this list: send the line "unsubscribe util-linux-ng" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html