Re: [RESEND][RFC] lscpu - CPU architecture information helper

From: "Bert Wesarg" <bert.wesarg@xxxxxxxxxxxxxx>
Subject: Re: [RESEND][RFC] lscpu - CPU architecture information helper
Date: Tue, 8 Jul 2008 18:04:21 +0200

> On Tue, Jul 8, 2008 at 14:54, Cai Qian <qcai@xxxxxxxxxx> wrote:
> > Hi,
> >
> > Thanks for all your comments. I have rewritten the tool in C, and made
> > several changes. The new output is like the following,
> >
> > $ /usr/bin/lscpu
> > CPU(s):                8
> > Thread(s) per core:    2
> > Core(s) per socket:    2
> > CPU socket(s):         2
> > NUMA node(s):          1
> > Vendor ID:             GenuineIntel
> > CPU family:            Itanium 2
> > Model:                 0
> > CPU MHz:               1598.000005
> > L1d cache:             16K
> > L1i cache:             16K
> > L2d cache:             256K
> > L2i cache:             1024K
> > L3 cache:              12288K
> >
> > $ /usr/bin/lscpu -p
> > #The following is the parsable format, which can be fed to other
> > #programs. Each different item in every column has a unique ID
> > #starting from zero.
> > #
> > #CPU,Core,Socket,Node,L1d,L1i,L2d,L2i,L3
> > 0,0,0,0,0,0,0,0,0
> > 1,0,0,0,0,0,0,0,0
> > 2,1,0,0,1,1,1,0,0
> > 3,1,0,0,1,1,1,0,0
> > 4,2,1,0,2,2,2,1,1
> > 5,2,1,0,2,2,2,1,1
> > 6,3,1,0,3,3,3,1,1
> > 7,3,1,0,3,3,3,1,1
> >
> > If you are happy with the output, I'll tidy up the code a bit, do more
> > testing, and write a manpage.
> >
> > Thanks,
> > CaiQian
> >
> > /*
> >  lscpu - CPU architecture information helper
> >  Copyright (C) 2008 Cai Qian <qcai@xxxxxxxxxx>
> >
> >  This program is free software: you can redistribute it and/or modify
> >  it under the terms of the GNU General Public License as published by
> >  the Free Software Foundation, either version 3 of the License, or
> >  (at your option) any later version.
> >
> >   This program is distributed in the hope that it will be useful,
> >   but WITHOUT ANY WARRANTY; without even the implied warranty of
> >   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >   GNU General Public License for more details.
> >
> >   You should have received a copy of the GNU General Public License
> >   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> > */
> >
> > #include <ctype.h>
> > #include <dirent.h>
> > #include <err.h>
> > #include <errno.h>
> > #include <fcntl.h>
> > #include <getopt.h>
> > #include <math.h>
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <string.h>
> > #include <sys/stat.h>
> > #include <sys/types.h>
> > #include <sys/utsname.h>
> > #include <unistd.h>
> >
> > #define BASE 2
> > #define CACHE_MAX 100
> > #define MARK_LABEL ":"
> >
> >  /* Calculate the number of siblings from a decimal mapping. */
> > int
> > sibling (double mapping)
> > {
> >  int i = 0, j = 0;
> >
> >  while (mapping != 0)
> >    {
> >      i++;
> >      j = (int) (log (mapping) / log (BASE));
> >      mapping -= pow (BASE, j);
> >    }
> >
> >  return i;
> > }
> >
> > /* Convert a hexadecimal number from a mapping file to decimal. */
> > double
> > decimal (char *file)
> So you extend your supported number of cpus from 32 to 52, great. You
> should either look inside the kernel code or the kernel documentation,
> or at libbitmask(3) from Paul Jackson/SGI [1].
> 

Hmm, it has been tested successfully on a 64-CPU SGI machine, though:

$ lscpu
CPU(s):                64
Thread(s) per core:    1
Core(s) per socket:    1
CPU socket(s):         64
NUMA node(s):          32
Vendor ID:             GenuineIntel
CPU family:            Itanium 2
Model:                 2
CPU MHz:               1500.000000
L1d cache:             16K
L1i cache:             16K
L2 cache:              256K
L3 cache:              4096K

$ lscpu -p
#The following is the parsable format, which can be fed to other
#programs. Each different item in every column has a unique ID
#starting from zero.
#
#CPU,Core,Socket,Node,L1d,L1i,L2,L3
0,0,0,0,0,0,0,0
1,1,1,0,1,1,1,1
2,2,2,1,2,2,2,2
3,3,3,1,3,3,3,3
4,4,4,2,4,4,4,4
5,5,5,2,5,5,5,5
6,6,6,3,6,6,6,6
7,7,7,3,7,7,7,7
8,8,8,4,8,8,8,8
9,9,9,4,9,9,9,9
10,10,10,5,10,10,10,10
11,11,11,5,11,11,11,11
12,12,12,6,12,12,12,12
13,13,13,6,13,13,13,13
14,14,14,7,14,14,14,14
15,15,15,7,15,15,15,15
16,16,16,8,16,16,16,16
17,17,17,8,17,17,17,17
18,18,18,9,18,18,18,18
19,19,19,9,19,19,19,19
20,20,20,10,20,20,20,20
21,21,21,10,21,21,21,21
22,22,22,11,22,22,22,22
23,23,23,11,23,23,23,23
24,24,24,12,24,24,24,24
25,25,25,12,25,25,25,25
26,26,26,13,26,26,26,26
27,27,27,13,27,27,27,27
28,28,28,14,28,28,28,28
29,29,29,14,29,29,29,29
30,30,30,15,30,30,30,30
31,31,31,15,31,31,31,31
32,32,32,16,32,32,32,32
33,33,33,16,33,33,33,33
34,34,34,17,34,34,34,34
35,35,35,17,35,35,35,35
36,36,36,18,36,36,36,36
37,37,37,18,37,37,37,37
38,38,38,19,38,38,38,38
39,39,39,19,39,39,39,39
40,40,40,20,40,40,40,40
41,41,41,20,41,41,41,41
42,42,42,21,42,42,42,42
43,43,43,21,43,43,43,43
44,44,44,22,44,44,44,44
45,45,45,22,45,45,45,45
46,46,46,23,46,46,46,46
47,47,47,23,47,47,47,47
48,48,48,24,48,48,48,48
49,49,49,24,49,49,49,49
50,50,50,25,50,50,50,50
51,51,51,25,51,51,51,51
52,52,52,26,52,52,52,52
53,53,53,26,53,53,53,53
54,54,54,27,54,54,54,54
55,55,55,27,55,55,55,55
56,56,56,28,56,56,56,56
57,57,57,28,57,57,57,57
58,58,58,29,58,58,58,58
59,59,59,29,59,59,59,59
60,60,60,30,60,60,60,60
61,61,61,30,61,61,61,61
62,62,62,31,62,62,62,62
63,63,63,31,63,63,63,63
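
That said, Bert's point about the mask width stands: going through a
double caps the exactly representable mask at the 52-bit mantissa. The
mask could instead be parsed with pure integer arithmetic by summing the
population count of each hex digit. A minimal sketch, assuming the sysfs
format of comma-separated lowercase hexadecimal words (the function name
is mine, not from the patch):

```c
#include <ctype.h>

/* Count the set bits in a sysfs cpumask string such as "000000ff"
   or "00000000,0000ffff".  Pure integer arithmetic, so masks wider
   than the 52-bit mantissa of a double stay exact. */
static int mask_weight(const char *mask)
{
	/* set bits in each hex digit 0x0 .. 0xf */
	static const int nibble_bits[16] =
		{ 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4 };
	int n = 0;

	for (; *mask; mask++) {
		unsigned char c = (unsigned char) *mask;

		if (c == ',' || isspace(c))
			continue;	/* word separator or trailing newline */
		if (!isxdigit(c))
			break;
		n += nibble_bits[isdigit(c) ? c - '0'
					    : tolower(c) - 'a' + 10];
	}
	return n;
}
```

This would replace both sibling() and decimal() in one step, with no
libm dependency.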


> >  /* number of CPUs */
> >  for (;;)
> >    {
> >      sprintf (buf, "%s/cpu/cpu%d", syspath, cpu);
> >      if (stat (buf, &info) == 0)
> >        cpu++;
> >      else
> >        break;
> >    }
> Again, no support for holes in the cpu range. But I'm currently unsure
> whether holes are actually possible. What about using readdir and
> checking for a cpu%d pattern in the directory name with sscanf?
>

There should not be a hole; otherwise, wouldn't that be a kernel bug? I
have tried to offline a CPU, and it still has an entry there.
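
Still, Bert's readdir suggestion would be cheap insurance; a rough
sketch of what it could look like (function names are mine, not from
the patch):

```c
#include <dirent.h>
#include <stdio.h>

/* Return the CPU number if name is exactly "cpu<number>", else -1.
   The trailing %c rejects names such as "cpufreq" or "cpuidle". */
static int cpu_index(const char *name)
{
	int id;
	char junk;

	if (sscanf(name, "cpu%d%c", &id, &junk) == 1 && id >= 0)
		return id;
	return -1;
}

/* Count cpu<N> entries under e.g. /sys/devices/system/cpu with
   readdir(3), so a hole in the numbering would not stop the scan. */
static int count_cpus(const char *sysdir)
{
	DIR *dir = opendir(sysdir);
	struct dirent *de;
	int n = 0;

	if (!dir)
		return -1;
	while ((de = readdir(dir)) != NULL)
		if (cpu_index(de->d_name) >= 0)
			n++;
	closedir(dir);
	return n;
}
```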
 
> >  if (have_topology)
> >    {
> >      /* number of threads */
> >      sprintf (buf, "%s/topology/thread_siblings", cpu0path);
> >      thread = sibling (decimal (buf));
> >
> >      /* number of cores */
> >      sprintf (buf, "%s/topology/core_siblings", cpu0path);
> >      core = sibling (decimal (buf)) / thread;
> >
> >      /* number of sockets */
> >      socket = cpu / core / thread;
> Sockets can also be counted via the physical_package_id topology
> attribute, but some older architectures have a bug with it [2].
> 
> >  if (have_node)
> >    {
> >      /* number of NUMA node */
> >      for (;;)
> >        {
> >          sprintf (buf, "%s/node/node%d", syspath, node);
> >          if (stat (buf, &info) == 0)
> >            node++;
> >          else
> >            break;
> >        }
> >
> >      /* information about how nodes share different CPUs */
> >      sprintf (buf, "%s/cpumap", node0path);
> >      nodecpu = sibling (decimal (buf));
> What about nodes >0? node0/cpumap contains only the cpus from node0;
> you should also check the cpumaps of the other nodes.
> 

So there is a possibility that different nodes have different numbers of
CPUs? If so, that looks too complicated to handle in this program, and
I'll probably remove the NUMA information altogether.
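
Before dropping it entirely, checking whether the uniform-nodes
assumption actually holds would be cheap. A hypothetical sketch (names
are mine) that compares the set-bit count of every node's cpumap
contents, once they have been read in:

```c
#include <string.h>

/* Set-bit count of one cpumask string; ',' separates the 32-bit
   words of the sysfs format, other non-hex characters are skipped. */
static int mask_bits(const char *mask)
{
	static const int bits[16] =
		{ 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4 };
	int n = 0;

	for (; *mask; mask++) {
		const char *hex = "0123456789abcdef";
		const char *p = strchr(hex, *mask);

		if (p && *mask)
			n += bits[p - hex];
	}
	return n;
}

/* masks[] holds the contents of node0/cpumap .. node(N-1)/cpumap.
   Returns 1 if every node carries the same number of CPUs, which is
   the only case the current "nodecpu" computation handles. */
static int nodes_uniform(const char *masks[], int nnodes)
{
	int i, w0 = mask_bits(masks[0]);

	for (i = 1; i < nnodes; i++)
		if (mask_bits(masks[i]) != w0)
			return 0;
	return 1;
}
```

If nodes_uniform() returned 0, the tool could still print a per-node
count in the parsable output and omit only the summary line.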

> By the way, you check only cpu0 for topology and cache info; I don't
> know whether it is possible to have cpus of different topology/cache
> types in one system. Anyone?
> 

I don't know either, but at the moment the program does not handle
systems installed with different physical processors.

CaiQian

> Regards
> Bert
> 
> [1] http://oss.sgi.com/projects/cpusets/
> [2] http://lkml.org/lkml/2008/5/13/429
