Re: [PATCH Part2 RFC v4 38/40] KVM: SVM: Provide support for SNP_GUEST_REQUEST NAE event

On 7/19/21 5:50 PM, Sean Christopherson wrote:
...

IIUC, this snippet in the spec means KVM can't restrict what requests are made
by the guests.  If so, that makes it difficult to detect/ratelimit a misbehaving
guest, and also limits our options if there are firmware issues (hopefully there
aren't).  E.g. ratelimiting a guest after KVM has explicitly requested it to
migrate is not exactly desirable.


The guest message page contains a message header followed by the encrypted payload, so technically KVM can peek into the message header to determine the message request type. If needed, we can ratelimit based on the message type.

In the current series we don't support migration etc., so I decided to ratelimit unconditionally.
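
For example, a rough sketch of how that could look (the header layout follows
the firmware ABI; SNP_MSG_REPORT_REQ and the extra per-type ratelimit bucket
below are illustrative, not part of this series):

	/* Plaintext header that precedes the encrypted payload. */
	struct snp_guest_msg_hdr {
		u8 authtag[32];
		u64 msg_seqno;
		u8 rsvd1[8];
		u8 algo;
		u8 hdr_version;
		u16 hdr_sz;
		u8 msg_type;		/* request type, e.g. report request */
		u8 msg_version;
		u16 msg_sz;
		u32 rsvd2;
		u8 msg_vmpck;
		u8 rsvd3[35];
	} __packed;

	static struct ratelimit_state *snp_msg_ratelimit(struct kvm_sev_info *sev,
							 struct snp_guest_msg_hdr *hdr)
	{
		/* hypothetical tighter bucket just for attestation report requests */
		if (hdr->msg_type == SNP_MSG_REPORT_REQ)
			return &sev->snp_report_req_rs;

		return &sev->snp_guest_msg_rs;
	}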

...

Now that KVM supports all the VMGEXIT NAEs required for the base SEV-SNP
feature, set the hypervisor feature to advertise it.

It would be helpful if this changelog listed the Guest Requests that are required
for "base" SNP, e.g. to provide some insight as to why we care about guest
requests.


Sure, I'll add more.


  static int snp_bind_asid(struct kvm *kvm, int *error)
@@ -1618,6 +1631,12 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
  	if (rc)
  		goto e_free_context;
+ /* Used for rate limiting SNP guest message request, use the default settings */
+	ratelimit_default_init(&sev->snp_guest_msg_rs);

Is this exposed to userspace in any way?  This feels very much like a knob that
needs to be configurable per-VM.


It's not exposed to userspace, and I am not sure whether userspace cares about this knob.
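
If it ever does, one possible shape (just a sketch; the msg_rs_interval_ms and
msg_rs_burst launch parameters below are hypothetical) would be to let
userspace pass interval/burst at SNP_LAUNCH_START time and fall back to the
kernel defaults:

	/* Hypothetical userspace knobs; 0 means "use the default". */
	u32 interval = params.msg_rs_interval_ms ?
		       msecs_to_jiffies(params.msg_rs_interval_ms) :
		       DEFAULT_RATELIMIT_INTERVAL;
	u32 burst = params.msg_rs_burst ?: DEFAULT_RATELIMIT_BURST;

	ratelimit_state_init(&sev->snp_guest_msg_rs, interval, burst);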


Also, what are the estimated latencies of a guest request?  If the worst case
latency is >200ms, a default ratelimit frequency of 5hz isn't going to do a whole
lot.


The latency will depend on what else is going on in the system at the time the request comes to the hypervisor. Access to the PSP is serialized, so other PSP commands executing in parallel will contribute to the latency.

...
+
+	if (!__ratelimit(&sev->snp_guest_msg_rs)) {
+		pr_info_ratelimited("svm: too many guest message requests\n");
+		rc = -EAGAIN;

What guarantee do we have that the guest actually understands -EAGAIN?  Ditto
for -EINVAL returned by snp_build_guest_buf().  AFAICT, our options are to return
one of the error codes defined in "Table 95. Status Codes for SNP_GUEST_REQUEST"
of the firmware ABI, kill the guest, or ratelimit the guest without returning
control to the guest.


Yes, let me look into passing one of the status codes defined in the spec.
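
Something along these lines, perhaps (a sketch only; SNP_GUEST_REQ_ERR_BUSY is
a placeholder for whichever spec-defined status we settle on, and e_fail is
assumed to propagate rc to the guest via SW_EXITINFO2):

	if (!__ratelimit(&sev->snp_guest_msg_rs)) {
		pr_info_ratelimited("svm: too many guest message requests\n");
		/* report a spec-defined status rather than a Linux errno */
		rc = SNP_GUEST_REQ_ERR_BUSY;
		goto e_fail;
	}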

+		goto e_fail;
+	}
+
+	rc = snp_build_guest_buf(svm, &data, req_gpa, resp_gpa);
+	if (rc)
+		goto e_fail;
+
+	sev = &to_kvm_svm(kvm)->sev_info;
+
+	mutex_lock(&kvm->lock);

Question on the VMPCK sequences.  The firmware ABI says:

    Each guest has four VMPCKs ... Each message contains a sequence number per
    VMPCK. The sequence number is incremented with each message sent. Messages
    sent by the guest to the firmware and by the firmware to the guest must be
    delivered in order. If not, the firmware will reject subsequent messages ...

Does that mean there are four independent sequences, i.e. four streams the guest
can use "concurrently", or does it mean the overall freshess/integrity check is
composed from four VMPCK sequences, all of which must be correct for the message
to be valid?


There are four independent sequence counters, and in theory the guest can use them concurrently. But access to the PSP must be serialized. Currently, the guest driver uses the VMPCK0 key to communicate with the PSP.


If it's the latter, then a traditional mutex isn't really necessary because the
guest must implement its own serialization, e.g. its own mutex or whatever, to
ensure there is at most one request in-flight at any given time.

The guest driver uses its own serialization to ensure that there is *exactly* one request in flight.

The mutex used here is to protect the KVM's internal firmware response buffer.
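
Roughly this shape, in other words (simplified; snp_resp_page stands in for the
series' internal response buffer, and error handling is elided):

	mutex_lock(&kvm->lock);

	/*
	 * The response buffer is shared by all vCPUs of the VM, so the PSP
	 * command and the copy back into guest memory must stay under the
	 * same lock.
	 */
	rc = sev_issue_cmd(kvm, SEV_CMD_SNP_GUEST_REQUEST, &data, &err);
	if (!rc)
		rc = kvm_write_guest(kvm, resp_gpa, sev->snp_resp_page, PAGE_SIZE);

	mutex_unlock(&kvm->lock);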


And on the KVM
side it means KVM can simply reject requests if there is already an in-flight
request.  It might also give us more/better options for ratelimiting?


I don't think we should be running into this scenario unless there is a bug in the guest kernel. The guest kernel support and the CCP driver both ensure that requests to the PSP are serialized.

In normal operation we may see one or two guest requests over the entire guest lifetime. I am thinking the first request may be for the attestation report and the second to derive keys, etc. It may change slightly when we add the migration command; I have not looked into it in great detail yet.

thanks



