Re: [PATCH v5 4/5] qemu: Add support to pin IOThreads to specific CPU

On Tue, Mar 10, 2015 at 04:34:24PM -0400, John Ferlan wrote:
> 
> 
> On 03/10/2015 12:37 PM, Ján Tomko wrote:
> > On Tue, Mar 10, 2015 at 11:20:43AM -0400, John Ferlan wrote:
> >> Is your objection the comment?  Should it be stricken or restated?
> >>
> > 
> > It's not the comment I have a problem with, it's the behavior that follows it.
> > 
> > Calling this API on a domain with a per-domain cpuset to pin an iothread
> > to all pCPUs will result in:
> > 
> > a) for AFFECT_LIVE
> >   The iothread will get pinned to all pCPUs
> >   The pininfo will get deleted from the definition. But missing pininfo
> >   does not mean all pCPUs if there is a per-domain cpuset.
> > 
> > b) for AFFECT_CONFIG
> >   The pininfo will be deleted, even though it does not match the
> >   cpumask.
> > 
> > I think the easiest 'fix' is to just drop all the 'doReset' functionality
> > and do not allow deleting pininfo with this API.
> > 
> > Alternatively, (since you already rejected
> > VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO above) you can pin the iothread to
> > the domain's cpuset.
> > 
> > Also, the full bitmap is ambiguous - does it mean 'reset
> > it to default' or 'pin it to all pCPUs'?
> > 
> 
> Well, let's just say I don't know all the nuances and known
> issues of CPU pinning...
> 
> Knowing that the other APIs (vcpu and emulator) are broken - are
> you planning to send patches to fix them as well? Or is that
> something I should do, mimicking whatever is settled upon here?
> 

Now I realize that it's impossible to pin a thread to all pCPUs if
its parent (the per-domain cpuset) does not contain them all:

<vcpu placement='static' cpuset='0-1'>4</vcpu>
<iothreads>2</iothreads>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
</cputune>

$ virsh vcpupin f21 1 r
error: Requested operation is not valid: failed to set cpuset.cpus in
cgroup for vcpu 1

So my diagnosis for case a) was wrong - it won't result in an inconsistent
configuration, because the pinning will fail before that.
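
For reference, the constraint is the usual cgroup one: a child group's
cpuset.cpus has to stay within its parent's. A minimal standalone sketch
of that subset check (plain C with made-up names, not libvirt code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A child cpuset (the per-vcpu/iothread cgroup) can only be set to CPUs
 * that its parent (the per-domain cpuset) already contains. */
static bool
cpuset_is_subset(uint64_t child, uint64_t parent)
{
    return (child & ~parent) == 0;
}

int
main(void)
{
    uint64_t domain_cpuset = 0x3; /* cpuset='0-1' from the <vcpu> element */
    uint64_t requested = 0xf;     /* 'r' on a 4-pCPU host => CPUs 0-3 */

    if (!cpuset_is_subset(requested, domain_cpuset))
        printf("setting cpuset.cpus would fail\n");
    return 0;
}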

As for b):
$ virsh vcpupin f21 1 r --config
This resulted in '0-3' being added to vcpu 1, instead of removing the
pinning. Apparently, the reset functionality only works if your host has
a number of CPUs divisible by 8, because we build the bitmap from
'maplen' bytes (1 here, so 8 bits), but only bits 0-3 of it are set.
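
To illustrate the maplen arithmetic (a standalone sketch in plain C, not
the actual libvirt code): the bitmap built from the caller's map always
has maplen * 8 bits, so unless the host CPU count is a multiple of 8 the
trailing bits stay clear and an 'all bits set' check can never succeed:

#include <stdbool.h>
#include <stdio.h>

/* Mimics building a bitmap of maplen * 8 bits from the caller-supplied
 * cpumap and then asking whether every bit is set. */
static bool
all_bits_set(const unsigned char *map, int maplen)
{
    int i;

    for (i = 0; i < maplen; i++) {
        if (map[i] != 0xff)
            return false;
    }
    return true;
}

int
main(void)
{
    /* 'r' on a 4-pCPU host: maplen == 1, bits 0-3 set, bits 4-7 clear */
    unsigned char cpumap[] = { 0x0f };
    int maplen = sizeof(cpumap);

    printf("reset detected: %s\n", all_bits_set(cpumap, maplen) ? "yes" : "no");
    return 0;
}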

While changing the meaning of these values (full bitmap, empty bitmap)
for the vcpu APIs could be possible (historically, we've refused to
accept NULL for the public *Free APIs, so I don't know if removing
the errors here is safe), fixing them is not a prerequisite for iothread
pinning and I might get to it someday (TM).

> So, from how I read your comments, I have the following diff, which
> removes the doReset conditions and removes the PinDel calls made as a
> result of them. Theoretically nothing calls the PinDel API now, and
> while I suppose I could remove it, once the code to hot-remove an
> IOThread is added, I'd need it back.
> 

I think we can keep the unused functions for a few releases, if you plan
to use them in the future.

> With respect to pinning to the domain cpuset - this API takes a new
> cpumap, so I'm not sure I understand under what condition it would
> use the domain's cpuset.  Maybe I read your comment too literally.
> 

For the LIVE case, I thought the pinning would succeed and the domain
definition would not show any iothreadpin, even though it should show an
explicit pinning to all pCPUs.
For the CONFIG case, the domain cpuset will be used on the next domain
start.

As for when the API itself should use the domain cpuset: when the cpuset
is set and the API deletes the pininfo, i.e. does a reset - both to
prevent inconsistencies in the AFFECT_CONFIG case and to actually
work in the AFFECT_LIVE case.
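
In other words - just a sketch of the suggested semantics with invented
names, not a proposal for the actual code - a reset request would fall
back to the per-domain cpuset whenever one is set:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical decision logic for a 'reset' request; none of these names
 * match libvirt's internals. */
static uint64_t
effective_pinning(uint64_t requested, uint64_t all_pcpus, uint64_t domain_cpuset)
{
    if (requested == all_pcpus && domain_cpuset != 0)
        return domain_cpuset; /* reset => fall back to the domain cpuset */
    return requested;         /* otherwise honour the explicit map */
}

int
main(void)
{
    uint64_t all_pcpus = 0xf;     /* 4-pCPU host */
    uint64_t domain_cpuset = 0x3; /* cpuset='0-1' */

    printf("reset pins to 0x%llx\n",
           (unsigned long long)effective_pinning(all_pcpus, all_pcpus,
                                                 domain_cpuset));
    return 0;
}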

> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index e3d0acf..74440f1 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -5773,7 +5773,6 @@ qemuDomainPinIOThread(virDomainPtr dom,
>      virCapsPtr caps = NULL;
>      virDomainDefPtr persistentDef = NULL;
>      virBitmapPtr pcpumap = NULL;
> -    bool doReset = false;
>      qemuDomainObjPrivatePtr priv;
>      virDomainVcpuPinDefPtr *newIOThreadsPin = NULL;
>      size_t newIOThreadsPinNum = 0;
> @@ -5823,12 +5822,6 @@ qemuDomainPinIOThread(virDomainPtr dom,
>          goto endjob;
>      }
>  
> -    /* pinning to all physical cpus means resetting,
> -     * so check if we can reset setting.
> -     */
> -    if (virBitmapIsAllSet(pcpumap))
> -        doReset = true;
> -
>      if (flags & VIR_DOMAIN_AFFECT_LIVE) {
>  
>          if (priv->iothreadpids == NULL) {
> @@ -5891,17 +5884,13 @@ qemuDomainPinIOThread(virDomainPtr dom,
>              }
>          }
>  
> -        if (doReset) {
> -            virDomainIOThreadsPinDel(vm->def, iothread_id);
> -        } else {
> -            if (vm->def->cputune.iothreadspin)
> -                virDomainVcpuPinDefArrayFree(vm->def->cputune.iothreadspin,
> -                                             vm->def->cputune.niothreadspin);
> +        if (vm->def->cputune.iothreadspin)
> +            virDomainVcpuPinDefArrayFree(vm->def->cputune.iothreadspin,
> +                                         vm->def->cputune.niothreadspin);
>  
> -            vm->def->cputune.iothreadspin = newIOThreadsPin;
> -            vm->def->cputune.niothreadspin = newIOThreadsPinNum;
> -            newIOThreadsPin = NULL;
> -        }
> +        vm->def->cputune.iothreadspin = newIOThreadsPin;
> +        vm->def->cputune.niothreadspin = newIOThreadsPinNum;
> +        newIOThreadsPin = NULL;
>  
>          if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm) < 0)
>              goto endjob;
> @@ -5930,24 +5919,20 @@ qemuDomainPinIOThread(virDomainPtr dom,
>              goto endjob;
>          }
>  
> -        if (doReset) {
> -            virDomainIOThreadsPinDel(persistentDef, iothread_id);
> -        } else {
> -            if (!persistentDef->cputune.iothreadspin) {
> -                if (VIR_ALLOC(persistentDef->cputune.iothreadspin) < 0)
> -                    goto endjob;
> -                persistentDef->cputune.niothreadspin = 0;
> -            }
> -            if (virDomainIOThreadsPinAdd(&persistentDef->cputune.iothreadspin,
> -                                         &persistentDef->cputune.niothreadspin,
> -                                         cpumap,
> -                                         maplen,
> -                                         iothread_id) < 0) {
> -                virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> -                               _("failed to update or add iothreadspin xml "
> -                                 "of a persistent domain"));
> +        if (!persistentDef->cputune.iothreadspin) {
> +            if (VIR_ALLOC(persistentDef->cputune.iothreadspin) < 0)
>                  goto endjob;
> -            }
> +            persistentDef->cputune.niothreadspin = 0;
> +        }
> +        if (virDomainIOThreadsPinAdd(&persistentDef->cputune.iothreadspin,
> +                                     &persistentDef->cputune.niothreadspin,
> +                                     cpumap,
> +                                     maplen,
> +                                     iothread_id) < 0) {
> +            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> +                           _("failed to update or add iothreadspin xml "
> +                             "of a persistent domain"));
> +            goto endjob;
>          }
>  
>          ret = virDomainSaveConfig(cfg->configDir, persistentDef);
> 

ACK to the patch with this diff squashed in.

Jan

