Re: [RFC PATCH v2 0/5] Paravirt Scheduling (Dynamic vcpu priority management)

On Fri, Jul 12, 2024, Mathieu Desnoyers wrote:
> On 2024-07-12 10:48, Sean Christopherson wrote:
> > > > I was looking at the rseq on request from the KVM call, however it does not
> > > > make sense to me yet how to expose the rseq area via the Guest VA to the host
> > > > kernel.  rseq is for userspace to kernel, not VM to kernel.
> > 
> > Any memory that is exposed to host userspace can be exposed to the guest.  Things
> > like this are implemented via "overlay" pages, where the guest asks host userspace
> > to map the magic page (rseq in this case) at GPA 'x'.  Userspace then creates a
> > memslot that overlays guest RAM to map GPA 'x' to host VA 'y', where 'y' is the
> > address of the page containing the rseq structure associated with the vCPU (in
> > pretty much every modern VMM, each vCPU has a dedicated task/thread).
> > 
> > At that point, the vCPU can read/write the rseq structure directly.
> 
> This helps me understand what you are trying to achieve. I disagree with
> some aspects of the design you present above: mainly the lack of
> isolation between the guest kernel and the host task doing KVM_RUN.
> We do not want to let the guest kernel store to rseq fields that would
> result in getting the host task killed (e.g. a bogus rseq_cs pointer).

Yeah, exposing the full rseq structure to the guest is probably a terrible idea.
The above isn't intended to be a design; the goal is just to illustrate how an
rseq-like mechanism can be extended to the guest without needing a
virtualization-specific ABI and without needing new KVM functionality.
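
Just to make the overlay idea concrete, here is a rough sketch of how a VMM could
wire it up with a dedicated memslot.  The slot number, GPA, and page handling are
all hypothetical, this is not working VMM code:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hypothetical sketch: overlay the host page that contains the vCPU task's
 * struct rseq at a guest-requested GPA.  Note that this exposes the *whole*
 * host page, i.e. anything else sharing that page, which feeds directly into
 * the isolation concern raised above.
 */
static int map_rseq_overlay(int vm_fd, uint32_t slot, uint64_t gpa,
                            void *rseq_page /* page-aligned host VA 'y' */)
{
        struct kvm_userspace_memory_region region = {
                .slot            = slot,
                .guest_phys_addr = gpa,                 /* GPA 'x' */
                .memory_size     = 4096,                /* one page */
                .userspace_addr  = (uintptr_t)rseq_page,
        };

        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}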

> But this is something we can improve upon once we understand what we
> are trying to achieve.
> 
> > 
> > The reason we KVM folks are pushing y'all towards something like rseq is that
> > (again, in any modern VMM) vCPUs are just tasks, i.e. priority boosting a vCPU
> > is actually just priority boosting a task.  So rather than invent something
> > virtualization-specific, invent a mechanism for priority boosting from userspace
> > without a syscall, and then extend it to the virtualization use case.
> > 
> [...]
> 
> OK, so how about we expose "offsets" tuning the base values?
> 
> - The task doing KVM_RUN, just like any other task, has its "priority"
>   value as set by setpriority(2).
> 
> - We introduce two new fields in the per-thread struct rseq, which is
>   mapped in the host task doing KVM_RUN and readable from the scheduler:
> 
>   - __s32 prio_offset; /* Priority offset to apply to the current task's priority. */
> 
>   - __u64 vcpu_sched;  /* Pointer to a struct vcpu_sched in user-space */

Ideally, there won't be a virtualization-specific structure.  A vCPU-specific
field might make sense (or it might not), but I really want to avoid defining a
structure that is unique to virtualization.  E.g. a userspace doing M:N scheduling
can likely benefit from any capacity hooks/information that would benefit a guest
scheduler.  I.e. rather than a vcpu_sched structure, have a user_sched structure
(or whatever name makes sense), and then have two struct pointers in rseq.

Though I'm skeptical that having two structs in play would be necessary or sane.
E.g. if both userspace and guest can adjust priority, then they'll need to coordinate
in order to avoid unexpected results.  I can definitely see wanting to let the
userspace VMM bound the priority of a vCPU, but that should be a relatively static
decision, i.e. can be done via syscall or something similarly "slow".
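
To sketch that direction (every name below is made up, nothing here is an
existing or proposed ABI), a generic structure referenced from rseq by a plain
userspace pointer could look something like:

#include <linux/types.h>

/*
 * Hypothetical sketch only.  A generic per-task scheduling-hint block that
 * any userspace scheduler could use (an M:N runtime directly, a VMM by
 * mapping the vCPU thread's copy into the guest), instead of a
 * virtualization-specific vcpu_sched.
 */
struct user_sched {
        __u32 len;                      /* Length of the active fields. */
        __s32 prio_offset;              /* Written by the user/guest scheduler. */
        __s32 cpu_capacity_offset;      /* Populated by the host kernel. */
};

/*
 * struct rseq would then grow a single field along the lines of
 *
 *      __u64 user_sched;       (userspace pointer to struct user_sched, 0 if unused)
 *
 * Whether a second, VMM-controlled pointer is worth the coordination headache
 * is the open question above.
 */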

>     vcpu_sched would be a userspace pointer to a new vcpu_sched structure,
>     which would typically be NULL except for tasks doing KVM_RUN. The structure
>     would sit in its own page(s) per vCPU, which takes care of isolation between
>     the guest kernel and the host process. Those pages would be readable and
>     writable by the guest kernel as well, and would contain e.g.:
> 
>     struct vcpu_sched {
>         __u32 len;  /* Length of active fields. */
> 
>         __s32 prio_offset;
>         __s32 cpu_capacity_offset;
>         [...]
>     };
> 
> So when the host kernel tries to calculate the effective priority of a task
> doing KVM_RUN, it would basically start from its current priority and offset
> it by (rseq->prio_offset + rseq->vcpu_sched->prio_offset).
> 
> The cpu_capacity_offset would be populated by the host kernel and read by the
> guest kernel scheduler for scheduling/migration decisions.
> 
> I'm certainly missing details about how priority offsets should be bounded for
> given tasks. This could be an extension to setrlimit(2).
> 
> Thoughts?
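
FWIW, the arithmetic above boils down to something like the sketch below.  The
definitions are repeated so it stands alone, all names are hypothetical, and how
the scheduler safely reads the userspace memory (and how the offsets get bounded)
is entirely hand-waved:

#include <linux/types.h>

/* Repeated from the quoted proposal, trimmed to the fields used here. */
struct vcpu_sched {
        __u32 len;
        __s32 prio_offset;              /* Written by the guest kernel. */
        __s32 cpu_capacity_offset;      /* Written by the host kernel. */
};

/* Stand-in for the two new rseq fields proposed above. */
struct rseq_prio_fields {
        __s32 prio_offset;              /* Host task's own offset. */
        struct vcpu_sched *vcpu_sched;  /* NULL except for vCPU tasks. */
};

/* effective priority = base priority + host offset + guest offset */
static int effective_prio(int base_prio, const struct rseq_prio_fields *rs)
{
        int prio = base_prio + rs->prio_offset;

        if (rs->vcpu_sched)
                prio += rs->vcpu_sched->prio_offset;

        return prio;
}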
> 
> Thanks,
> 
> Mathieu
> 
> -- 
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com
> 



