Re: Writeback Stalls

On Wed, Sep 19, 2012 at 06:36:11PM +0000, Markus Stockhausen wrote:
> Hello,
> 
> I'm not sure if you can help me with the following problem but I must start somewhere.
> 
> We have a small VMware infrastructure: 8 hosts with roughly 50 VMs.
> Their images are hosted on 2 Ubuntu 12.04 NFS storage servers, each of them
> having a 14-disk RAID 6 array. On top of the array runs a single XFS filesystem
> with roughly 10TB of disk space.
> 
> From time to time we see stalls in the infrastructure. The machines become
> unresponsive and hang for a few seconds. The controller was the first item
> under suspicion. But after a lot of examination we created a very artificial
> setup that shows the real reason for the stalls: writeback handling. The
> parameters are:
> 

So:

> - set NFS to async

Disable the NFS server-client writeback throttling control loop
(i.e. the mechanism whereby commits block until data is synced to
disk). Also, data loss on NFS server crash.
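
For reference, this is the per-export setting in /etc/exports; a
minimal sketch with a hypothetical export path:

    # async: the server acknowledges writes/commits before they hit disk
    /export/vmstore  *(rw,async,no_subtree_check)

With the default sync option, COMMIT replies are held back until the
data is on stable storage - that is the throttling feedback loop
referred to above.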

> - disable controller and disk writeback cache

IO is exceedingly slow.
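
A sketch of how these caches are typically turned off (device name
hypothetical; RAID controller caches are vendor-specific and usually
need the vendor's CLI tool rather than hdparm):

    # disable the drive's volatile write cache
    hdparm -W0 /dev/sdX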

> - enable cgroup
> - set dirty_background_bytes to some very high value (4GB)
> - set dirty_bytes to some very high value (4GB)
> - set dirty_expire_centisecs to some very high value (60 secs)

Allow lots of dirty data in memory before writeback starts
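
These presumably map to sysctls along these lines (note 60 seconds
is 6000 centisecs):

    sysctl -w vm.dirty_background_bytes=4294967296
    sysctl -w vm.dirty_bytes=4294967296
    sysctl -w vm.dirty_expire_centisecs=6000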

> - set blkio.throttle.write_bps_device to a very low value (2MB/s)

And throttle writeback to 2MB/s.
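
The throttle itself would be the cgroup (v1) blkio interface; a
sketch with a hypothetical group name and device numbers
(2MB/s = 2097152 bytes/s):

    mkdir /sys/fs/cgroup/blkio/nfsd-throttle
    # format: <major>:<minor> <bytes per second>
    echo "8:0 2097152" > /sys/fs/cgroup/blkio/nfsd-throttle/blkio.throttle.write_bps_device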

Gun. Foot. Point. Shoot.

> Now generate some write load on a Windows VM. During the test we observe what
> is happening on the storage and the VM. The result is:
> 
> - dirty pages are increasing
> - writeback is constantly 0
> - VM is working well
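
(Presumably these are the Dirty and Writeback fields of
/proc/meminfo; one way to watch them:

    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
)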

The VM is not "working well" with this configuration - it's building
up a large cache of dirty pages that you've limited to draining at a
very slow rate. IOWs, on an NFS server, having a writeback value of
0 is exactly the problem, and disabling the client-server throttling
feedback loop only makes it worse. You want writeback on the server
to start early, not delay it until you run out of memory.
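
A minimal sketch of the tuning implied here (start writeback early;
the values are illustrative only, not from this thread):

    sysctl -w vm.dirty_background_bytes=67108864   # kick off background writeback at 64MB
    sysctl -w vm.dirty_expire_centisecs=500        # expire dirty data after 5 seconds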

> At some point writeback is kicking in and all of a sudden the VM stalls. During
> this time the setup shows
> 
> - most of the dirty pages are transferred to writeback pages
> - writeback is done at the above set limit (2MB/s)
> - VM is not responding 

Because you ran out of clean pages and it takes forever to write
4GB of dirty data @ 2MB/s.
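
To put a number on "forever": 4GB at 2MB/s is 4096MB / 2MB/s = 2048
seconds, i.e. roughly 34 minutes of stalled writeback.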

> After writeback has finished everything goes back to normal. Additional remark:
> VMs DO NOT hang if I create heavy writes on the XFS filesystem to non-VM-related files.

Probably because they are not in the throttled cgroup that the NFS
daemons are in.
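
One way to check, assuming the hypothetical cgroup layout sketched
above, is to see which threads are confined to the throttled group:

    cat /sys/fs/cgroup/blkio/nfsd-throttle/tasks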

> We are interested in this kind of setup for several reasons:
> 
> 1. Keep VMs responsive
> 
> 2. Allow VMs to generate short spikes of write I/Os at a higher rate than the 
> disk subsystem is capable of. 
> 
> 3. Write this data back to the disk in the background over a longer period
> of time. Ensure that a limited writeback rate keeps enough headroom so that
> read I/Os are not delayed too much.

Fundamentally, you are doing it all wrong. High throughput, low
latency NFS servers write dirty data to disk fast, not leave it in
memory until you run out of clean memory, because that causes
everything to block waiting for writeback IO completion to be able
to free memory...

This really isn't an XFS problem.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

