Re: cpugroups for hyperv hypervisor

On 7/9/2024 2:43 PM, Daniel P. Berrangé wrote:
On Wed, Jun 26, 2024 at 09:18:37AM -0500, Praveen K Paladugu wrote:
Hey folks,

My team is working on exposing `cpugroups` to Libvirt while using the 'hyperv'
hypervisor with cloud-hypervisor (VMM). cpugroups are relevant in a specific
configuration of hyperv called 'minroot'. In the minroot configuration, the
hypervisor artificially restricts Dom0 to run on a subset of cpus (Logical
Processors). The rest of the cpus can be assigned to guests.

cpugroups manage the CPUs assigned to guests and their scheduling
properties. Initially this looks similar to `cpuset` (in cgroups), but the
controls available with cpugroups don't map easily to those in cgroups. For
example:

* "IdleLPs" are the number of Logical Processors in a cpugroup, that should
be reserved to a guest even if they are idle

Are you saying that "IdleLPs" are host CPUs that are reserved for
a guest, but which are NOT currently going to be used for running
any virtual guest CPUs ?

No. These are host CPUs that are reserved to a guest. Even if the guest is idle, they will still remain reserved to it; no other guest can be run on these host CPUs.


At what point do IdleLPs become used (non-idle) by the guest ?

If the guest to which the host CPUs were assigned becomes active, the assigned LPs (host CPUs) will be used again.


* "SchedulingPriority", the priority(values between 0..7) with which to
schedule CPUs in a cpugroup.

We currently have

     <vcpusched vcpus='0-4,^3' scheduler='fifo' priority='1'/>

and 'SchedulingPriority' would conceptually map to the 'priority'
value.

It sounds like you're saying that the priority applies to /all/
CPUs in the cpugroup. IOW, if we were to re-use <vcpusched> for
this, we would have to require that the 'vcpus' mask always
covers every CPU in the cpugroup.

Yes. SchedulingPriority is a cpugroup-level property; it applies to all CPUs in the cpugroup.


It is probably better to just declare a new global element:

    <cputune>
       <priority>0..7</priority>
    </cputune>

since we've got precedent there with global elements for
<shares>, <period>, <quota>, etc. setting overall VM policy,
which can optionally be refined per-vCPU by other elements.


As controls like the above don't easily map to anything in cgroups, using a
driver-specific element in the domain XML to configure cpugroups seems like
the right approach. For example:

I think our general view is that tunable parameters in general are
almost entirely driver specific.

We provide a generic API framework for tunables using the virTypedParameter
arrays. The named tunables listed within the parameter array though, will
generally be different per-driver. Similarly we have the general <cputune>
element, but stuff within that is often per-driver.

By "tunables listed within the parameter array though" do you mean parameters passed to virsh invocations like "virsh memtune/schedinfo" or something else?


I confirmed that the <cputune> element can be handled on a per-driver basis, without affecting other drivers. So, to address this case, extending <cputune> with new elements like the ones below makes sense to me:

<cputune>
   <cpugroup_idlelps value='4'/>
   <cpugroup_priority value='6'/>
</cputune>

I suggest this to keep cpugroup-related tunables separate from the rest. This allows us to document these tunables without mixing them with the existing ones.

For the time being, we need to handle 3 cpugroup tunables; there could be more in the future. A rough sketch of how the driver might carry them follows the list below.

1) idlelps: Number of LPs reserved to a cpugroup
2) cpuCap: CPU Capacity, from assigned LPs, to be assigned to a cpugroup
3) schedulingPriority: Priority with which to schedule a cpugroup
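
Purely as an illustration (the struct and field names below are hypothetical, not existing libvirt code), the ch driver could carry these three tunables roughly like this:

  /* Hypothetical internal representation of the three cpugroup tunables
   * above; names are illustrative only. */
  typedef struct _virCHCpugroupParams {
      unsigned int idleLPs;        /* LPs kept reserved even while the guest is idle */
      unsigned int cpuCap;         /* CPU capacity granted out of the assigned LPs */
      unsigned int schedPriority;  /* scheduling priority, 0..7 */
  } virCHCpugroupParams;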



If there are some parameters which are common to many drivers that's a
bonus, but I wouldn't say that is a required expectation.

IOW, I don't expect the cloud hypervisor driver to use a custom XML
namespace for this task. We should define new XML elements and/or
virTypedParameter constant names as needed, and re-use existing stuff
where sensible.

Agreed. I hope the above suggestion makes sense to you.




<ch:cpugroups>
   <idle_lps value='4'/>
   <scheduling_priority value='6'/>
</ch:cpugroups>

As cpugroups are only relevant while using the minroot configuration on hyperv, I
don't see any value in generalizing this setting. So, having some "ch"
driver-specific settings seems like a good approach to implement this
feature.

Question1: Do you see any concerns with this approach?


The cpugroup settings can be applied/modified using a sysfs interface or using
a cmdline tool on the host. I see Libvirt uses both of these mechanisms for
various use cases. But, given a choice, the sysfs-based interface seems like the
simpler approach to me. With the sysfs interface, Libvirt does not have to take
install-time dependencies on new tools.

Question2: Of "sysfs" vs "cmdline tool" which is preferred, given a choice?

Directly using sysfs is preferable. It has lower overhead, and we can see
directly what fails, allowing clearer error reporting when needed. sysfs is
simple enough that spawning a cmdline tool doesn't reduce our work, and if
anything increases it.

Understood. Thanks!
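
(For illustration, a minimal sketch of that sysfs-style approach: write one cpugroup property directly and report exactly what failed. The path and attribute name below are hypothetical placeholders, not the real Hyper-V/minroot interface.)

  /* Sketch: set a cpugroup attribute through a sysfs-style file and
   * report the precise failure.  The path is a hypothetical placeholder. */
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static int set_cpugroup_attr(const char *path, const char *value)
  {
      int fd = open(path, O_WRONLY);
      if (fd < 0) {
          fprintf(stderr, "cannot open %s: %s\n", path, strerror(errno));
          return -1;
      }
      if (write(fd, value, strlen(value)) < 0) {
          fprintf(stderr, "cannot write '%s' to %s: %s\n",
                  value, path, strerror(errno));
          close(fd);
          return -1;
      }
      close(fd);
      return 0;
  }

  int main(void)
  {
      /* Hypothetical attribute for a cpugroup's scheduling priority. */
      const char *attr = "/sys/kernel/hyperv/cpugroups/<group-id>/scheduling_priority";
      return set_cpugroup_attr(attr, "6") != 0;
  }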


With regards,
Daniel

Thanks for your response, Daniel. I have been out for the past 3 weeks and so couldn't get back to you earlier.

--
Regards,
Praveen



