Re: I/O stalls when merging qcow2 snapshots on nfs

Hello Ben and Trond,

> On 5 May 2024, at 7:29, Thomas Glanzmann wrote (paraphrased):

> When committing 20 - 60 GB snapshots of KVM VMs that are stored on NFS, I get
> 20+ second I/O stalls.

> When doing backups and migrations with KVM on NFS I get I/O stalls in
> the guest. How can I avoid that?

* Benjamin Coddington <bcodding@xxxxxxxxxx> [2024-05-06 13:25]:
> What NFS version ends up getting mounted here?

NFS 4.2 (the output below already has your and Trond's options added):

172.31.0.1:/nfs on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,clientaddr=172.31.0.6,local_lock=none,write=eager,addr=172.31.0.1)

> You might eliminate some head-of-line blocking issues with the
> "nconnect=16" mount option to open additional TCP connections.

> My view of what could be happening is that the IO from your guest's process
> is congesting with the IO from your 'virsh blockcommit' process, and we
> don't currently have a great way to classify and queue IO from various
> sources in various ways.

Thank you for reminding me of nconnect. I evaluated it with VMware ESX, saw no
benefit when benchmarking with a single VM, and dismissed it. But of course it
makes sense when there is more than one concurrent I/O stream.

* Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> [2024-05-06 15:47]:
> Two suggestions:
>    1. Try mounting the NFS partition on which these VMs reside with the
>       "write=eager" mount option. That ensures that the kernel kicks
>       off the write of the block immediately once QEMU has scheduled it
>       for writeback. Note, however that the kernel does not wait for
>       that write to complete (i.e. these writes are all asynchronous).
>    2. Alternatively, try playing with the 'vm.dirty_ratio' or
>       'vm.dirty_bytes' values in order to trigger writeback at an
>       earlier time. With the default value of vm.dirty_ratio=20, you
>       can end up caching up to 20% of your total memory's worth of
>       dirty data before the VM triggers writeback over that 1Gbit link.

Thank you for the write=eager option, I was not aware of it. I often run into
problems where a 10 Gbit/s network pipe fills up my page cache, which then has
to destage 128 GB * 0.2 = 25.6 GB of dirty data to disk. The disks can't keep
up in my case, resulting in long I/O stalls. Usually my disks sustain between
100 MB/s (synchronously replicated DRBD link over 200 km) and 500 MB/s (SATA
SSDs). I tried to tell the kernel to destage sooner by setting
vm.dirty_expire_centisecs=100, which improved some workloads but not all.
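
For reference, this is roughly the kind of sysctl tuning I mean; the absolute
values below are only illustrative and would have to be sized to the sustained
write rate of the disks:

# Switch from a percentage of RAM to absolute limits for dirty data.
# Note: setting vm.dirty_bytes zeroes vm.dirty_ratio (and vice versa),
# the kernel only honours one of the two.
sysctl -w vm.dirty_bytes=$((4 * 1024 * 1024 * 1024))             # block writers above ~4 GB of dirty data
sysctl -w vm.dirty_background_bytes=$((1 * 1024 * 1024 * 1024))  # start background writeback above ~1 GB
sysctl -w vm.dirty_expire_centisecs=100                          # treat dirty data as old after 1 second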

So, I think I found a solution to my problem by doing the following:

- Increase NFSD threads to 128:

cat > /etc/nfs.conf.d/storage.conf <<'EOF'
[nfsd]
threads = 128

[mountd]
threads = 8
EOF
echo 128 > /proc/fs/nfsd/threads

- Mount the NFS volume with -o nconnect=16,write=eager (full mount command shown after this list)

- Use iothreads and cache=none.

  <iothreads>2</iothreads>
  <driver name='qemu' type='qcow2' cache='none' discard='unmap' iothread='1'/>
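
For completeness, the mount command looks roughly like this (server address
and mount point taken from the mount output further up; the remaining options
are left at their defaults):

mount -t nfs4 -o rw,hard,nconnect=16,write=eager 172.31.0.1:/nfs /mnt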

By doing the above I no longer see any I/O stalls longer than one second (at
most a two-second gap between timestamps in my date loop).
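
The date loop is nothing fancy, roughly something like the following running
inside the guest; a two-second gap between consecutive timestamps therefore
means about one second of stall:

while true; do date; sleep 1; done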

Thank you two again for helping me out with this.

Cheers,
	Thomas

PS: With cache=writethrough and without I/O threads, guest I/O stalls for the entire time the blockcommit runs.



