Hi Xiaoyao,

> > > > generic cluster just means the cluster of processors, i.e, a group of
> > > > cpus/lps. It is just a middle level between die and core.
> > >
> > > Not sure if you mean the "cluster" device for TCG GDB? "cluster" device
> > > is different with "cluster" option in -smp.
> > No, I just mean the word 'cluster'. And I thought what you called "generic
> > cluster" means "a cluster of logical processors"
> >
> > Below I quote the description of Yanan's commit 864c3b5c32f0:
> >
> > A cluster generally means a group of CPU cores which share L2 cache
> > or other mid-level resources, and it is the shared resources that
> > is used to improve scheduler's behavior. From the point of view of
> > the size range, it's between CPU die and CPU core. For example, on
> > some ARM64 Kunpeng servers, we have 6 clusters in each NUMA node,
> > and 4 CPU cores in each cluster. The 4 CPU cores share a separate
> > L2 cache and a L3 cache tag, which brings cache affinity advantage.
> >
> > What I get from it, is, cluster is just a middle level between CPU die and
> > CPU core.
>
> Here the words "a group of CPU" is not the software concept, but a hardware
> topology.

When I found this material:

https://www.kernel.org/doc/Documentation/devicetree/bindings/cpu/cpu-topology.txt

I realized that the most essential difference between cluster and module
is that a cluster supports nesting, i.e., a cluster can itself contain
nested clusters as a layer of the CPU topology (see the rough cpu-map
sketch at the end of this mail).

Even though QEMU's description of cluster looked similar to module when
it was introduced, we cannot foresee whether ARM/RISC-V and other
device-tree-based arches will go on to introduce nested clusters in
QEMU in the future.

To avoid potential conflicts, it would be better to introduce modules
for x86 to differentiate them from clusters.

Thanks,
Zhao
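
P.S. A rough sketch of what the nesting looks like in that binding (the
node names and CPU phandles below are made up just for illustration,
not taken from any real platform):

    cpu-map {
            socket0 {
                    cluster0 {                 /* outer cluster */
                            cluster0 {         /* nested cluster 0 */
                                    core0 {
                                            cpu = <&CPU0>;
                                    };
                                    core1 {
                                            cpu = <&CPU1>;
                                    };
                            };
                            cluster1 {         /* nested cluster 1 */
                                    core0 {
                                            cpu = <&CPU2>;
                                    };
                            };
                    };
            };
    };

The "clusters" parameter of -smp, in contrast, can only describe one
flat cluster level, so reusing "cluster" for the x86 module level could
conflict with a possible nested-cluster extension later.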