On 6/20/22 16:03, Pierre Morel wrote:
> Hi,
>
> This new spin is essentially for coherence with the last Linux CPU
> Topology patch, function testing and coding style modifications.
>
> Foreword
> ========
>
> The goal of this series is to implement CPU topology for S390. It
> improves the preceding series with the implementation of books and
> drawers, of non-uniform CPU topology, and with documentation.
>
> To use these patches, you will need the Linux series version 10.
> You can find it here:
> https://lkml.org/lkml/2022/6/20/590
>
> Currently this code is for KVM only; I have no idea whether it is
> interesting to provide a TCG patch. If it ever is, it will be done in
> another series.
>
> To get a better understanding of the S390x CPU topology and its
> implementation in QEMU, you can have a look at the documentation in
> the last patch or follow the introduction below.
>
> A short introduction
> ====================
>
> CPU topology is described in the S390 POP essentially by the
> description of two instructions:
>
> PTF   Perform Topology Function, used to poll for topology changes
>       and to set the polarization, although the polarization part is
>       not part of this item.
>
> STSI  Store System Information, with the SYSIB 15.1.x providing the
>       topology configuration.
>
> S390 topology is a 6-level hierarchical topology with up to 5 levels
> of containers. The last topology level specifies the CPU cores.
>
> This patch series only uses the two lower levels: sockets and cores.
>
> To get the information on the topology, S390 provides the STSI
> instruction, which stores a structure providing the list of the
> containers used in the machine topology: the SYSIB.
> A selector within the STSI instruction allows choosing how many
> topology levels will be provided in the SYSIB.
>
> Using the Topology List Entries (TLE) provided inside the SYSIB, the
> Linux kernel is able to compute the cache distance between two cores
> and can use this information to make scheduling decisions.

Do the socket, book, ... metaphors and looking at STSI from the existing
smp infrastructure even make sense?

STSI 15.1.x reports the topology to the guest, and for a virtual machine
this topology can be very dynamic. So a CPU can move from one topology
container to another, but the socket of a CPU changing while it is
running seems a bit strange. And this isn't supported by this patch
series as far as I understand; the only topology changes happen on
hotplug.
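
For readers not familiar with the SYSIB 15.1.x layout discussed above,
here is a rough C sketch of the two kinds of Topology List Entries.
The struct and field names are illustrative only, loosely following the
container and CPU TLE layout described in the POP; they are not the
identifiers used by this series:

    /* Illustrative sketch only -- names do not match the series. */
    #include <stdint.h>

    /* Container TLE: one entry per container (socket, book, drawer)
     * at nesting level nl > 0. */
    struct container_tle {
        uint8_t  nl;           /* nesting level of this container     */
        uint8_t  reserved[6];
        uint8_t  id;           /* container identifier                */
    };

    /* CPU TLE: terminal entry at nesting level 0, describing a group
     * of up to 64 cores of the same type and polarization. */
    struct cpu_tle {
        uint8_t  nl;           /* always 0 for a CPU entry            */
        uint8_t  reserved[3];
        uint8_t  flags;        /* dedication and polarization bits    */
        uint8_t  type;         /* CPU type                            */
        uint16_t origin;       /* core address of the first mask bit  */
        uint64_t mask;         /* one bit per core, relative to origin*/
    };

If I read the POP correctly, these entries follow a SYSIB header giving
the maximum number of entries per nesting level and are stored top-down,
each container entry followed by the entries it contains, which is how
the guest kernel reconstructs the tree and derives cache distance.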