Re: [PATCH RFC 00/10] qemu: Enable SCHED_CORE for domains and helper processes


 



On 5/26/22 16:00, Dario Faggioli wrote:
> On Thu, 2022-05-26 at 14:01 +0200, Dario Faggioli wrote:
>> Thoughts?
>>
> Oh, and there are even a couple of other (potential) use cases for
> having an (even more!) fine-grained control of core-scheduling.
> 
> So, right now, giving a virtual topology to a VM, pretty much only
> makes sense if the VM has its vcpus pinned. Well, actually, there's
> something that we can do even if that is not the case, especially if we
> define at least *some* constraints on where the vcpus can run, even if
> we don't have strict and static 1-to-1 pinning... But for sure we
> shouldn't define an SMT topology, if we don't have that (i.e., if we
> don't have strict and static 1-to-1 pinning). And yet, the vcpus will
> run on cores and threads!
> 
> Now, if we implement per-vcpu core-scheduling (which means being able
> to put not necessarily whole VMs, but single vcpus [although, of the
> same VM], in trusted groups), then we can:
> - put vcpu0 and vcpu1 of VM1 in a group
> - put vcpu2 and vcpu3 of VM1 in a(nother!) group
> - define, in the virtual topology of VM1, vcpu0 and vcpu1 as
>   SMT-threads of the same core
> - define, in the virtual topology of VM1, vcpu2 and vcpu3 as
>   SMT-threads of the same core

These last two we can't really do ourselves. They have to come from the
domain definition, otherwise we might break guest ABI, because unless
configured otherwise in the domain XML, all vCPUs are separate cores
(e.g. <vcpu>4</vcpu> alone gives you four single-core vCPUs).

What we could do is utilize the CPU topology, regardless of pinning.
For instance, take the following config:

  <vcpu>4</vcpu>
  <cpu>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
  </cpu>

which gives you two cores with two threads each. Now, we could place
the two threads of one core into one group, and the two threads of the
other core into another group.

Ideally, I'd like to avoid computing an intersection with pinning,
because that gets hairy pretty quickly (as you demonstrated in this
e-mail). For properly pinned vCPUs this won't incur any performance
penalty (yes, it's still possible to come up with an artificial
counterexample), and for "misconfigured" pinning, well, tough luck.
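
Just so we're talking about the same thing, "properly pinned" would be
something like this (the host CPU numbers are made up; 8/24 and 9/25
stand for two pairs of SMT siblings on the host):

  <vcpu>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='24'/>
    <vcpupin vcpu='2' cpuset='9'/>
    <vcpupin vcpu='3' cpuset='25'/>
  </cputune>
  <cpu>
    <topology sockets='1' dies='1' cores='2' threads='2'/>
  </cpu>

i.e. each virtual core's threads sit on one host core's threads, so
putting them into the same group doesn't restrict scheduling at all.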

Michal



