On 08/10/15 07:28, Dave Chinner wrote:
On Wed, Oct 07, 2015 at 09:13:06PM +0300, Avi Kivity wrote:
On 07/10/15 18:13, Eric Sandeen wrote:
On 10/7/15 10:08 AM, Brian Foster wrote:
On Wed, Oct 07, 2015 at 09:24:15AM -0500, Eric Sandeen wrote:
On 10/7/15 9:18 AM, Gleb Natapov wrote:
Hello XFS developers,
We are working on scylladb[1] database which is written using seastar[2]
- highly asynchronous C++ framework. The code uses aio heavily: no
synchronous operation is allowed at all by the framework; otherwise
performance drops drastically. We noticed that the only mainstream FS
in Linux that takes aio seriously is XFS. So let me start by thanking
you guys for the great work! But unfortunately we also noticed that
sometimes io_submit() is executed synchronously even on XFS.
Looking at the code I see two cases where this happens: unaligned
IO and writes past EOF. It looks like we hit both. For the first one we
make a special effort to never issue unaligned IO, and we use XFS_IOC_DIOINFO
to figure out what the alignment should be, but it does not help. Looking at
the code, though, xfs_file_dio_aio_write() checks alignment against
m_blockmask, which is set to sbp->sb_blocksize - 1, so aio expects the buffer
to be aligned to the filesystem block size, not to the values that DIOINFO
returns. Is this intentional? How should our code know what to align
buffers to?
/* "unaligned" here means not aligned to a filesystem block */
if ((pos & mp->m_blockmask) || ((pos + count) & mp->m_blockmask))
unaligned_io = 1;
It should be aligned to the filesystem block size.
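Not from the thread, but a minimal sketch of how a program might discover
both constraints at runtime, given the answer above: XFS_IOC_DIOINFO
supplies the memory-address and minimum IO-size alignment, and on XFS
fstatfs() reports sb_blocksize in f_bsize. The <xfs/xfs.h> header path is
assumed to come from the xfsprogs development headers; error handling is
trimmed for brevity.

    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <sys/vfs.h>        /* fstatfs */
    #include <xfs/xfs.h>        /* XFS_IOC_DIOINFO, struct dioattr */

    /* Returns 0 on success; buffer address alignment in *mem_align and
     * file offset/length alignment in *file_align. */
    static int dio_alignment(int fd, size_t *mem_align, size_t *file_align)
    {
            struct dioattr da;
            struct statfs sfs;

            if (ioctl(fd, XFS_IOC_DIOINFO, &da) < 0 || fstatfs(fd, &sfs) < 0)
                    return -1;

            *mem_align = da.d_mem;
            /* DIOINFO reports d_miniosz, but the write path quoted above
             * checks offsets against the filesystem block size, so take
             * the larger of the two. */
            *file_align = (size_t)sfs.f_bsize > (size_t)da.d_miniosz
                            ? (size_t)sfs.f_bsize : (size_t)da.d_miniosz;
            return 0;
    }

Buffers would then come from posix_memalign() with mem_align, and file
offsets/lengths would be rounded to file_align to stay off the
unaligned_io path quoted above.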
I'm not sure exactly what kinds of races are opened if the above locking
were absent, but I'd guess it's related to the buffer/block state
management, block zeroing and whatnot that is buried in the depths of
the generic dio code.
Yep:
commit eda77982729b7170bdc9e8855f0682edf322d277
Author: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Tue Jan 11 10:22:40 2011 +1100
xfs: serialise unaligned direct IOs
[...]
I fixed something similar in ext4 at the time, FWIW.
Makes sense.
Is there a way to relax this for reads?
The above mostly only applies to writes. Reads don't modify data, so
racing unaligned reads against other reads won't give unexpected
results, and so they aren't serialised.
i.e. serialisation will only occur when:
- unaligned write IO will serialise until sub-block zeroing
is complete.
- write IO extending EOF will serialise until post-EOF
zeroing is complete.
By "complete" here, do you mean that a call to truncate() returned, or
that its results reached the disk an unknown time later?
I could, immediately after truncating the file, extend it to a very
large size, and truncate it back just before the final fsync/close
sequence. This has downsides from the viewpoint of user support (why is
the file so large after a crash? what happens with backups?) but is
better than nothing.
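A minimal sketch of that workaround; PREEXTEND_SIZE and do_aio_writes()
are hypothetical stand-ins for the application's own sizing and write
phase. The file is extended once up front so the AIO writes never cross
EOF, then trimmed back before fsync/close.

    #include <fcntl.h>
    #include <unistd.h>

    #define PREEXTEND_SIZE (1LL << 30)  /* hypothetical: past any likely size */

    extern off_t do_aio_writes(int fd); /* hypothetical write phase; returns
                                           the final logical file size */

    static int write_below_eof(int fd)
    {
            if (ftruncate(fd, PREEXTEND_SIZE) < 0) /* move EOF out of the way */
                    return -1;

            off_t final_size = do_aio_writes(fd); /* io_submit() writes now
                                                     land below EOF */

            if (ftruncate(fd, final_size) < 0)    /* trim back before close */
                    return -1;
            return fsync(fd);
    }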
- cached pages are found on the inode (i.e. mixing
buffered/mmap access with direct IO).
We don't do that.
- truncate/extent manipulation syscall is run
Actually, we do call fallocate() ahead of io_submit() (in a worker
thread, on non-overlapping ranges) to optimize file layout, and also in
the belief that it would reduce the amount of blocking io_submit() does.
Should we serialize the fallocate() calls vs. io_submit() (on the same
file)? Were those fallocate() calls a good idea in the first place?
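A sketch of one way to do the preallocation described above (not the
thread's actual code): with FALLOC_FL_KEEP_SIZE the allocation does not
move EOF, so it at least avoids the size-extension side of the
serialisation; whether it helps layout or blocking is exactly the open
question here.

    #define _GNU_SOURCE
    #include <fcntl.h>          /* fallocate, FALLOC_FL_KEEP_SIZE */

    static int prealloc_range(int fd, off_t offset, off_t len)
    {
            /* Allocate blocks for [offset, offset+len) without
             * changing i_size. */
            return fallocate(fd, FALLOC_FL_KEEP_SIZE, offset, len);
    }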
All other DIO will be issued and run concurrently, reads and writes.
Realistically, if you care about performance (which obviously
you do) then you do not do unaligned IO, and you try hard to
minimise operations that extend the file...
On SSDs, if you care about performance you avoid random writes, which
cause write amplification. So you do have to extend the file, unless
you know its size in advance, which we don't.
Also, does "extend the file" here mean just the size, or extent
allocation as well?
A final point is discoverability. There is no way to discover the safe
alignment for reads and writes, or which operations block io_submit(),
except by asking here, which cannot be done at runtime. Interfaces that
provide a way to query these attributes are very important to us.
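For completeness, a sketch (again not from the thread) of the aligned
submission pattern the whole discussion presumes, using libaio; `align`
is assumed to come from a probe like the dio_alignment() sketch earlier,
and error unwinding is trimmed for brevity.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <libaio.h>

    /* Submit one aligned O_DIRECT read and wait for it; pos and len
     * must be multiples of align. */
    static long aligned_read(int fd, size_t align, off_t pos, size_t len)
    {
            io_context_t ctx = 0;
            struct iocb cb, *cbs[1] = { &cb };
            struct io_event ev;
            void *buf;

            if (io_setup(1, &ctx) < 0 || posix_memalign(&buf, align, len))
                    return -1;

            io_prep_pread(&cb, fd, buf, len, pos);
            if (io_submit(ctx, 1, cbs) != 1)   /* aligned and below EOF:
                                                  should not serialise */
                    return -1;
            if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)
                    return -1;

            free(buf);
            io_destroy(ctx);
            return (long)ev.res;    /* bytes read, or negative errno */
    }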