Re: [PATCH v5 00/16] Introduce virtio-mem <memory/> model

On 13.09.21 16:52, Michal Privoznik wrote:
> v4 of:
>
> https://listman.redhat.com/archives/libvir-list/2021-June/msg00679.html
>
> diff to v4:
> - Rebased onto current master
> - Worked in David's suggestions, e.g. renamed <actual/> to
>   <current/>, implemented offline memory update, implemented the --node
>   argument to virsh update-memory-device, prealloc is OFF and reserve is
>   ON for virtio-mem
>
> Some suggestions are left as future work. For instance:
> - Don't require memory slots, because virtio-mem lives on the PCI bus anyway
> - Allow a path-backed backend for virtio-mem
> - Support .prealloc for the virtio-mem object (not memory-backend-* !)
>
>
> I keep an occasionally rebased version on my gitlab:
>
> https://gitlab.com/MichalPrivoznik/libvirt/-/commits/virtio_mem_v5/


Hi Michal,

I noticed one minor thing:

If I start a VM with

    <numa>
      <cell id='0' cpus='0-7' memory='2097152' unit='KiB'/>
      <cell id='1' cpus='8-15' memory='2097152' unit='KiB'/>
    </numa>
    ...
    <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>16777216</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>2097152</requested>
      </target>
      <alias name='ua-virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </memory>
    <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>16777216</size>
        <node>1</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>2097152</requested>
      </target>
      <alias name='ua-virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memory>


I get after it booted up

  <maxMemory slots='2' unit='KiB'>41943040</maxMemory>
  <memory unit='KiB'>37748736</memory>
  <currentMemory unit='KiB'>9437184</currentMemory>
    <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>16777216</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>2097152</requested>
        <current unit='KiB'>131072</current>
      </target>
      <alias name='ua-virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </memory>
    <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>16777216</size>
        <node>1</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>2097152</requested>
        <current unit='KiB'>2097152</current>
      </target>
      <alias name='ua-virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memory>

Note the "<current unit='KiB'>131072</current>". Inside the guest, I can
see that it really is 2G.
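
(For completeness, the kind of rough in-guest check I mean: look at the
total online memory and subtract the boot memory -- nothing
virtio-mem-specific, just the usual tools:)

    # total memory as seen by the guest kernel; whatever is above the
    # boot memory is what the virtio-mem devices have plugged
    grep MemTotal /proc/meminfo
    lsmem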

If I then trigger a fresh "virsh update-memory-device Fedora34", the
<current> value gets updated properly.
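
(For reference, the kind of invocation I mean -- I'm assuming the
--alias/--requested-size spellings from this series:)

    # nudge one device by (re)setting its requested size; afterwards
    # the <current> value in the live XML matches the guest again
    virsh update-memory-device Fedora34 --alias ua-virtiomem0 \
          --requested-size 2GiB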

I assume the devices get initialized in the guest in parallel. Could it
be that libvirt gets confused when there are concurrent notifications
about two devices, or could QEMU accidentally swallow some events? I'll
investigate the latter, especially whether the rate limiting of these
events in QEMU messes something up.
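
One way to watch what actually arrives on the libvirt side (assuming the
notification in question is QEMU's MEMORY_DEVICE_SIZE_CHANGE event,
which is what virtio-mem emits when its size changes):

    # run this while the guest boots; each resize should show up as one
    # event per device -- if one of them never appears, it was dropped
    # or rate-limited before libvirt could consume it
    virsh qemu-monitor-event Fedora34 \
          --event MEMORY_DEVICE_SIZE_CHANGE --loop --pretty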


--
Thanks,

David / dhildenb



