Re: [Qemu-devel] [PATCH v2] numa: warn if numa 'mem' option or default RAM splitting between nodes is used.


 



On Tue, 19 Mar 2019 14:51:07 +0000
Daniel P. Berrangé <berrange@xxxxxxxxxx> wrote:

> On Tue, Mar 19, 2019 at 02:08:01PM +0100, Igor Mammedov wrote:
> > On Thu, 7 Mar 2019 10:07:05 +0000
> > Daniel P. Berrangé <berrange@xxxxxxxxxx> wrote:
> >   
> > > On Wed, Mar 06, 2019 at 07:54:17PM +0100, Igor Mammedov wrote:  
> > > > On Wed, 6 Mar 2019 18:16:08 +0000
> > > > Daniel P. Berrangé <berrange@xxxxxxxxxx> wrote:
> > > >     
> > > > > On Wed, Mar 06, 2019 at 06:33:25PM +0100, Igor Mammedov wrote:    
> > > > > > Amend the -numa option docs and print warnings if the 'mem' option or
> > > > > > default RAM splitting between nodes is used. This is intended to discourage
> > > > > > users from configurations that can only fake NUMA on the guest side while
> > > > > > reducing guest performance due to the inability to properly configure the
> > > > > > VM's RAM on the host.
> > > > > > 
> > > > > > In the NUMA case, it's recommended to always explicitly configure guest RAM
> > > > > > using the -numa node,memdev={backend-id} option.
> > > > > > 
> > > > > > Signed-off-by: Igor Mammedov <imammedo@xxxxxxxxxx>
> > > > > > ---
> > > > > >  numa.c          |  5 +++++
> > > > > >  qemu-options.hx | 12 ++++++++----
> > > > > >  2 files changed, 13 insertions(+), 4 deletions(-)
> > > > > > 
> > > > > > diff --git a/numa.c b/numa.c
> > > > > > index 3875e1e..42838f9 100644
> > > > > > --- a/numa.c
> > > > > > +++ b/numa.c
> > > > > > @@ -121,6 +121,8 @@ static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
> > > > > >  
> > > > > >      if (node->has_mem) {
> > > > > >          numa_info[nodenr].node_mem = node->mem;
> > > > > > +        warn_report("Parameter -numa node,mem is obsolete,"
> > > > > > +                    " use -numa node,memdev instead");    
> > > > > 
> > > > > My comments from v1 still apply. We must not do this as long as
> > > > > libvirt has no choice but to continue using this feature.    
> > > > It has the choice to use 'memdev' whenever creating a new VM and to continue
> > > > using 'mem' with existing VMs.    
> > > 
> > > Unfortunately we don't have such a choice. Libvirt has no concept of the
> > > distinction between an 'existing' and 'new' VM. It just receives an XML
> > > file from the mgmt application and with transient guests, we have no
> > > persistent configuration record of the VM. So we've no way of knowing
> > > whether this VM was previously running on this same host, or another
> > > host, or is completely new.  
> > In the case of a transient VM, libvirt might be able to use the machine version
> > to decide which option to use ('memdev' has been around for more than 4 years,
> > since 2.1), or QEMU could provide introspection into what a machine version
> > does (not) support, as was discussed before.
> > 
> > As discussed elsewhere (v1 thread|IRC), there are users (mainly CI) for whom
> > fake NUMA is sufficient and who do not ask for explicit pinning, so libvirt
> > defaults to the legacy -numa node,mem option.
> > Those users are neither aware nor care that they should use 'memdev' instead
> > (I'm not sure if they are even able to ask libvirt for non-pinned NUMA memory,
> > which would result in 'memdev' being used).
> > This patch doesn't obsolete anything yet; it serves to inform users that they
> > are using a legacy option and advises a replacement, so that users know what
> > they should adapt to.
> > 
> > Once we deprecate and then remove 'mem' for new machine types only (while
> > keeping 'mem' working on old machine versions), neither new nor old libvirt
> > will be able to start a new machine type with the 'mem' option and will have
> > to use the 'memdev' variant. So we don't have migration issues with new
> > machines, and old ones continue working with 'mem'.  
> 
> I'm not seeing what has changed which would enable us to deprecate
> something only for new machines. That's not possible from libvirt's
> POV as old libvirt will support new machines & thus we have to
> continue using "mem" for all machines in the scenarios where we
> currently use it. 
There are several issues here:
 1. How old a libvirt are we talking about?

 2. Old libvirt + new QEMU won't be able to start QEMU with a new machine
    type and the 'mem' option, so we don't have a live migration problem;
    it's rather a management issue, where mgmt should not try to migrate
    to such a host (if it managed to end up with an incompatible package
    bundle, that is not a QEMU or libvirt problem per se).

 3. In general, dropping a feature per machine type or for all machines at
    once is the same, since there will be old libvirt that uses the removed
    CLI option and won't be able to start new QEMU with that option; even
    worse, it would affect all machines. So we should agree on a new,
    reasonable deprecation period (if the current one isn't sufficient)
    that would allow users to adapt to a breaking change.

 4. In the case of downstream, it ships a compatible bundle, and if a user
    installs QEMU from a newer release without the other new bits, that falls
    into the unsupported category, and the first thing support would say is
    to update the other parts along with QEMU. What I'm saying is that it's
    the downstream distro's job to organize the upgrade path, track
    dependencies, and backport or invent a compat layer for earlier releases
    if necessary.
    So it's rather questionable whether we should care about arbitrarily old
    libvirt with new QEMU in the case of new machines (especially upstream).
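For reference, the two option forms under discussion can be sketched as below. The sizes, ids, and node count are illustrative only (not taken from this thread); the point is that the 'memdev' form names an explicit memory backend per node, which is what allows host-side placement to be configured, while the 'mem' form only sizes a guest-visible node.

```shell
# Legacy form: guest sees two NUMA nodes, but the host-side backing of
# each node's RAM cannot be controlled (this is what the warning flags).
LEGACY="-m 4G -numa node,nodeid=0,mem=2G -numa node,nodeid=1,mem=2G"

# Recommended form: each node's RAM is an explicit memory backend object,
# which can then carry host-side properties (binding, policy, etc.).
MEMDEV="-m 4G \
 -object memory-backend-ram,id=ram0,size=2G \
 -object memory-backend-ram,id=ram1,size=2G \
 -numa node,nodeid=0,memdev=ram0 \
 -numa node,nodeid=1,memdev=ram1"

echo "$LEGACY"
echo "$MEMDEV"
```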




--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list



