Re: [PATCH v6 2/8] target/s390x: add zpci-interp to cpu models

On 6/1/22 10:10 AM, David Hildenbrand wrote:
On 01.06.22 15:48, Matthew Rosato wrote:
On 6/1/22 5:52 AM, David Hildenbrand wrote:
On 24.05.22 21:02, Matthew Rosato wrote:
The zpci-interp feature is used to specify whether zPCI interpretation is
to be used for this guest.

We have

DEF_FEAT(SIE_PFMFI, "pfmfi", SCLP_CONF_CHAR_EXT, 9, "SIE: PFMF
interpretation facility")

and

DEF_FEAT(SIE_SIGPIF, "sigpif", SCLP_CPU, 12, "SIE: SIGP interpretation
facility")


Should we call this simply "zpcii" or "zpciif" (if the official name
includes "Facility")?


This actually controls the use of two facilities which really only make
sense together - maybe just zpcii


Signed-off-by: Matthew Rosato <mjrosato@xxxxxxxxxxxxx>
---
   hw/s390x/s390-virtio-ccw.c          | 1 +
   target/s390x/cpu_features_def.h.inc | 1 +
   target/s390x/gen-features.c         | 2 ++
   target/s390x/kvm/kvm.c              | 1 +
   4 files changed, 5 insertions(+)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index 047cca0487..b33310a135 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -806,6 +806,7 @@ static void ccw_machine_7_0_instance_options(MachineState *machine)
       static const S390FeatInit qemu_cpu_feat = { S390_FEAT_LIST_QEMU_V7_0 };
       ccw_machine_7_1_instance_options(machine);
+    s390_cpudef_featoff_greater(14, 1, S390_FEAT_ZPCI_INTERP);
       s390_set_qemu_cpu_model(0x8561, 15, 1, qemu_cpu_feat);
   }
diff --git a/target/s390x/cpu_features_def.h.inc b/target/s390x/cpu_features_def.h.inc
index e86662bb3b..4ade3182aa 100644
--- a/target/s390x/cpu_features_def.h.inc
+++ b/target/s390x/cpu_features_def.h.inc
@@ -146,6 +146,7 @@ DEF_FEAT(SIE_CEI, "cei", SCLP_CPU, 43, "SIE: Conditional-external-interception f
   DEF_FEAT(DAT_ENH_2, "dateh2", MISC, 0, "DAT-enhancement facility 2")
   DEF_FEAT(CMM, "cmm", MISC, 0, "Collaborative-memory-management facility")
   DEF_FEAT(AP, "ap", MISC, 0, "AP instructions installed")
+DEF_FEAT(ZPCI_INTERP, "zpci-interp", MISC, 0, "zPCI interpretation")

How is this feature exposed to the guest, meaning, how can the guest
sense support?

Just a gut feeling: does this toggle enable the host to use
interpretation, while the guest cannot really tell whether it's enabled
or not? Then it's not a guest CPU feature. But let's hear first what this
actually enables :)

This has changed a few times, but collectively the host kernel can
determine whether it is allowable based upon the availability of certain
facility/sclp bits plus the availability of an ioctl interface.

If all of these are available, the host kernel allows zPCI
interpretation, with userspace able to toggle it on/off for the guest
via this feature.  When allowed and enabled, two ECB bits are then set
for each guest vcpu to enable the associated facilities.  The guest
continues to use zPCI instructions in the same manner as before; the
function handles it receives from CLP instructions will look different
but are still used in the same manner.

We don't yet add vsie support of the facilities with this series, so the
corresponding facility and sclp bits aren't forwarded to the guest.
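
(For illustration only, not a hunk from this patch: a minimal sketch of
how the QEMU kvm code could gate the feature on that host-kernel check.
The capability name KVM_CAP_S390_ZPCI_OP and the helper name are
assumptions here; kvm_vm_check_extension() and set_bit() are existing
QEMU helpers from "sysemu/kvm.h" and "qemu/bitops.h".)

    /*
     * Hypothetical sketch: only advertise zpci-interp when the kernel
     * reports the combined facility/sclp/ioctl support via a single
     * capability.  S390FeatBitmap comes from target/s390x/cpu_features.h.
     */
    static void kvm_s390_try_add_zpci_interp(S390FeatBitmap features)
    {
        /* KVM_CAP_S390_ZPCI_OP is an assumed capability name. */
        if (kvm_vm_check_extension(kvm_state, KVM_CAP_S390_ZPCI_OP) > 0) {
            set_bit(S390_FEAT_ZPCI_INTERP, features);
        }
    }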

That's exactly my point:

sigpif and pfmfi are actually vsie features. I'd have expected that
zpcii would be a vsie feature as well.

If interpretation is really more an implementation detail in the
hypervisor used to implement zpci, rather than an actual guest feature
(meaning, the guest is able to observe it as if it were a real CPU
feature), then we most probably want some other way to toggle it (maybe
via the machine?).

Example: KVM uses SIGP interpretation based on availability. However, we
don't toggle it via sigpif. sigpif actually tells the guest that it can
use the SIGP interpretation facility along with vsie.

You mention "CLP instructions will look different", I'm not sure if that
should actually be handled via the CPU model. From my gut feeling, zpcii
should actually be the vsie zpcii support to be implemented in the future.


Well, what I meant was that the CLP response data looks different, primarily because when interpretation is enabled the guest would get a passthrough of the function handle (which in turn has the bits turned off that force hypervisor intercepts), rather than one that QEMU fabricated.

As far as a machine option goes, we still need a mechanism by which userspace can decide whether it's OK to enable interpretation in the first place. I guess we can take advantage of the fact that the capability associated with the ioctl interface can indicate both that the kernel interface is available and that all of the necessary hardware facilities are available to that host kernel.

So I guess we could use that to decide the default for a machine setting (yes if everything is available, no if not).
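
(Again just a hedged sketch of that machine-option alternative, not code
from this series: an "auto" default that resolves to on or off depending
on what the host kernel advertises.  The zpci_interp field and the
KVM_CAP_S390_ZPCI_OP capability are assumptions; OnOffAuto, kvm_enabled()
and kvm_vm_check_extension() are existing QEMU pieces.)

    /*
     * Hypothetical: resolve a machine property default at machine init.
     * S390CcwMachineState is the existing machine state struct; the
     * zpci_interp member is made up for this sketch.
     */
    static void s390_resolve_zpci_interp(S390CcwMachineState *ms)
    {
        if (ms->zpci_interp == ON_OFF_AUTO_AUTO) {
            bool ok = kvm_enabled() &&
                      kvm_vm_check_extension(kvm_state,
                                             KVM_CAP_S390_ZPCI_OP) > 0;
            ms->zpci_interp = ok ? ON_OFF_AUTO_ON : ON_OFF_AUTO_OFF;
        }
    }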


So I wonder if we could simply always enable zPCI interpretation if
HW+kernel support is around and we're on a new compat machine? If there
is a way that migration could break (from old kernel to new kernel),
we'd have to think about alternatives.

zpci devices are currently marked unmigratable, so if you want to migrate you need to detach all of them first anyway today.
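
(For context, not part of this patch: in QEMU a device is typically
marked unmigratable through its VMStateDescription, which makes migration
fail while such a device is attached; hence the need to detach zpci
devices before migrating.  The name below is made up for illustration.)

    static const VMStateDescription vmstate_zpci_example = {
        .name = "zpci-example",   /* hypothetical name */
        .unmigratable = 1,        /* blocks migration while attached */
    };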


