+static char *s390_top_set_level2(S390Topology *topo, char *p)
+{
+ int i, origin;
+
+ for (i = 0; i < topo->nr_sockets; i++) {
+ if (!topo->socket[i].active_count) {
+ continue;
+ }
+ p = fill_container(p, 1, i);
+ for (origin = 0; origin < S390_TOPOLOGY_MAX_ORIGIN; origin++) {
+            uint64_t mask = topo->socket[i].mask[origin];
+
+ if (mask) {
+ p = fill_tle_cpu(p, mask, origin);
+ }
+ }
+ }
+ return p;
+}
Why is it not possible to compute this topo information at "runtime",
when stsi is called, without maintaining state in an extra S390Topology
object? Couldn't we loop over the CPU list to gather the topology bits
for the same result?
It would greatly simplify the feature.
C.
The vCPUs are not stored in the CPU list in creation order, nor in topology order.
To build the SYSIB we need an intermediate structure that re-orders the CPUs per container.
We could do this re-ordering during the STSI interception, but the idea was to keep this instruction as fast as possible.
The second reason is to have a structure ready for QEMU migration when we introduce vCPU migration from one socket to another, which implies a different internal representation of the topology.
However, if, as discussed yesterday, we use a new CPU flag, we would not need any special migration structure in the current series.
So only the first reason remains for doing the re-ordering preparation when plugging a vCPU: to optimize the STSI instruction.
If we think the optimization is not worth it, or does not bring enough to be worth considering, we can do everything during the STSI interception.
Is it called on a hot code path? AFAICT, it is only called once
per cpu when started. insert_stsi_3_2_2 is also a guest exit and it queries the machine definition in a very similar way.
Thanks,
C.