[PATCH v2 0/3] xfs: run eofblocks scan on ENOSPC

Hi all,

Here's v2 of the eofblocks scan on ENOSPC series, incorporating feedback
from v1:

http://oss.sgi.com/archives/xfs/2014-03/msg00388.html

The major change here is to simplify the error checking logic and to
tie the eofblocks scan to the inode flush in the ENOSPC scenario. I've
done some high-level testing that doesn't seem to trigger any
pathological behavior given the circumstances (i.e., performance will
never be ideal as we head into ENOSPC).
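
For reference, the retry logic in the buffered write path ends up with
roughly the following shape. This is a simplified sketch of what patch
2 does, not the literal diff; the generic_perform_write() call and the
surrounding locking stand in for the real code:

	ssize_t			ret;
	int			enospc = 0;

write_retry:
	ret = generic_perform_write(file, &from, pos);
	if (ret == -EDQUOT && !enospc) {
		/* quota failure: scan only this inode's quota owners */
		enospc = xfs_inode_free_quota_eofblocks(ip);
		if (enospc)
			goto write_retry;
	} else if (ret == -ENOSPC && !enospc) {
		struct xfs_eofblocks eofb = {0};

		enospc = 1;

		/*
		 * Global ENOSPC: flush dirty data first. As described
		 * below, the flush also serializes racing writers so
		 * they don't all pile into eofblocks scans at once.
		 */
		xfs_flush_inodes(ip->i_mount);
		eofb.eof_flags = XFS_EOF_FLAGS_SYNC;
		xfs_icache_free_eofblocks(ip->i_mount, &eofb);
		goto write_retry;
	}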

I tested on a hacked filesystem that makes preallocation persistent (no
trim on close), disables preallocation throttling and sets the
background scanner interval to a large value, to create worst case
conditions. I ran an fs_mark workload to create 64k files of 1MB each,
then started 16 sequential 8GB dd writers expected to hit ENOSPC. This
is on a 16-CPU box with 32GB RAM and a 200GB fs (with agcounts of 32
and 1024).

Via tracepoints, I generally observe that the inode flush acts as a
filter that prevents many threads from entering eofblocks scans at
once. E.g., by the time the first handful of threads make it through a
scan, they and/or others have dirtied more data for the remaining
queued-up inode flushers to work with. I notice occasional spikes in
kworker or rcu processing, but nothing lasting longer than a couple of
seconds.

A downside I've noticed with this logic is that once one thread runs a
scan and makes it through the retry sequence, it has a better chance of
allocating more of the recently freed space than the others, all of
which might have queued on the inode flush lock by the time the first
flush/scan completes.

This leads to what one might consider "unfair" allocation across the
set of writers when we enter this scenario. E.g., I saw tests where
some threads were able to complete the 8GB write while others only made
it to 2-3GB before the filesystem completely ran out of space. Given
the benefit of the series, I think this is something that could be
enhanced incrementally if it turns out to be a problem in practice.

I also have an xfstests test, which I plan to post soon, that verifies
lingering preallocations can be reclaimed in a reasonable manner before
returning ENOSPC.

Thoughts, reviews, flames appreciated.

Brian

v2:
- Drop flush mechanism during eofblocks scan (along with prereq patch).
- Simplify scan logic on ENOSPC. Separate EDQUOT from ENOSPC and tie
  ENOSPC scan to inode flush.
- Eliminate unnecessary project quota handling from
  xfs_inode_free_quota_eofblocks() (ENOSPC is a separate path; see the
  sketch below).
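
To illustrate the EDQUOT side, the helper ends up looking something
like the following. This is a sketch based on the description above,
not the patch itself; the xfs_dquot_lowsp() low-space check and the
exact eofblocks fields are illustrative, so check patch 2 for the real
code:

int
xfs_inode_free_quota_eofblocks(
	struct xfs_inode	*ip)
{
	int			scan = 0;
	struct xfs_dquot	*dq;
	struct xfs_eofblocks	eofb = {0};

	/* scan owner (patch 1): avoid deadlocking on the inode we hold */
	eofb.eof_scan_owner = ip->i_ino;
	eofb.eof_flags = XFS_EOF_FLAGS_SYNC;

	/* only bother scanning a quota that is actually low on space */
	dq = xfs_inode_dquot(ip, XFS_DQ_USER);
	if (dq && xfs_dquot_lowsp(dq)) {
		eofb.eof_uid = VFS_I(ip)->i_uid;
		eofb.eof_flags |= XFS_EOF_FLAGS_UID;
		scan = 1;
	}

	dq = xfs_inode_dquot(ip, XFS_DQ_GROUP);
	if (dq && xfs_dquot_lowsp(dq)) {
		eofb.eof_gid = VFS_I(ip)->i_gid;
		eofb.eof_flags |= XFS_EOF_FLAGS_GID;
		scan = 1;
	}

	if (scan)
		xfs_icache_free_eofblocks(ip->i_mount, &eofb);

	return scan;
}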

Brian Foster (3):
  xfs: add scan owner field to xfs_eofblocks
  xfs: run an eofblocks scan on ENOSPC/EDQUOT
  xfs: squash prealloc while over quota free space as well

 fs/xfs/xfs_dquot.h  | 15 ++++++++++++++
 fs/xfs/xfs_file.c   | 23 +++++++++++++++++----
 fs/xfs/xfs_icache.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_icache.h |  3 +++
 fs/xfs/xfs_iomap.c  | 20 ++++++++++++------
 5 files changed, 109 insertions(+), 11 deletions(-)

-- 
1.8.3.1




