Re: [PATCH] dio: track and serialise unaligned direct IO

On Fri, Jul 30, 2010 at 10:43:09AM -0700, Badari Pulavarty wrote:
> On Fri, 2010-07-30 at 08:45 +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > If we get two unaligned direct IOs to the same filesystem block
> > that is marked as a new allocation (i.e. buffer_new), then both IOs will
> > zero the portion of the block they are not writing data to. As a
> > result, when the IOs complete there will be a portion of the block
> > that contains zeros from the last IO to complete rather than the
> > data that should be there.
> > 
> > This is easily manifested by qemu using aio+dio with an unaligned
> > guest filesystem - every IO is unaligned and filesystem corruption is
> > encountered in the guest filesystem. xfstest 240 (from Eric Sandeen)
> > is also a simple reproducer.
> > 
> > To avoid this problem, track unaligned IO that triggers sub-block zeroing and
> > check new incoming unaligned IOs that require sub-block zeroing against that
> > list. If we get an overlap where the start and end of unaligned IOs hit the
> > same filesystem block, then we need to block the incoming IOs until the IO that
> > is zeroing the block completes. The blocked IO can then continue without
> > needing to do any zeroing and hence won't overwrite valid data with zeros.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> I can confirm that it fixes corruption of my VM images when using
> AIO+DIO (cache=none,aio=native). I haven't reviewed the patch closely, but
> 
> 1) Can we do this only for the AIO+DIO combination? For regular DIO, we
> should have all the IOs serialized by i_mutex anyway.

Not for filesystems that do their own locking. In most cases XFS
does not take the i_mutex during DIO writes, and when it does it
drops it long before we call into the generic direct IO code that
does the sub-block zeroing. So the i_mutex does not guarantee any
form of serialisation in direct IO writes at all.
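
To make the corruption concrete, here's a trivial userspace
illustration of roughly what the sub-block zeroing amounts to when
two unaligned writes land in the same newly allocated block and each
zeroes the part of the block it doesn't own. The block size and
offsets are made up and this is not the kernel code, just a
demonstration of the effect:

#include <stdio.h>
#include <string.h>

#define FS_BLOCK_SIZE 4096

/*
 * Model one unaligned sub-block write to a buffer_new block: the data
 * lands in [off, off + len) and, because the block is a new
 * allocation, everything outside that range gets zeroed.
 */
static void subblock_write(char *block, size_t off, size_t len, char fill)
{
        memset(block, 0, off);                          /* zero the head */
        memset(block + off, fill, len);                 /* the real data */
        memset(block + off + len, 0,
               FS_BLOCK_SIZE - off - len);              /* zero the tail */
}

int main(void)
{
        char block[FS_BLOCK_SIZE];

        /* IO B writes 512 bytes of 'B' at offset 512 ... */
        subblock_write(block, 512, 512, 'B');
        /* ... then IO A completes last, writing 512 bytes of 'A' at 0 */
        subblock_write(block, 0, 512, 'A');

        /* A's tail zeroing has wiped out B's data in [512, 1024) */
        printf("byte 600 is %d, should have been 'B' (%d)\n",
               block[600], 'B');
        return 0;
}

Whichever IO completes last wins, and everything it zeroed is lost.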

> 2) Having a single global list (for all devices) might cause scaling
> issues.

Unaligned direct IO is undesirable in the first place. While we
should behave correctly in this corner case, I don't see any need
for it to be particularly efficient, because the real fix for
performance problems with unaligned DIO is to not issue it at all.
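
For reference, the tracking amounts to something like the sketch
below: a single lock-protected list of filesystem blocks that
currently have sub-block zeroing in flight, and a wait on that list
when a second unaligned IO hits one of those blocks. This is a
simplified userspace illustration with made-up names, using pthreads
in place of the kernel spinlock/waitqueue primitives, not the patch
itself:

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* One entry per filesystem block with sub-block zeroing in flight. */
struct zero_block {
        unsigned long long      block;          /* fs block number */
        struct zero_block       *next;
};

static pthread_mutex_t zero_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  zero_done = PTHREAD_COND_INITIALIZER;
static struct zero_block *zero_list;

/* Caller must hold zero_lock. */
static bool block_is_zeroing(unsigned long long block)
{
        struct zero_block *zb;

        for (zb = zero_list; zb; zb = zb->next)
                if (zb->block == block)
                        return true;
        return false;
}

/*
 * Called before an unaligned IO zeroes the unwritten part of @block.
 * If someone else is already zeroing that block, wait for them to
 * finish and tell the caller to skip the zeroing so it doesn't
 * overwrite valid data. Returns true if the caller should zero.
 */
static bool start_block_zeroing(unsigned long long block)
{
        struct zero_block *zb;

        pthread_mutex_lock(&zero_lock);
        if (block_is_zeroing(block)) {
                do {
                        pthread_cond_wait(&zero_done, &zero_lock);
                } while (block_is_zeroing(block));
                pthread_mutex_unlock(&zero_lock);
                return false;           /* zeroing already done */
        }

        zb = malloc(sizeof(*zb));
        if (!zb)
                abort();                /* error handling elided */
        zb->block = block;
        zb->next = zero_list;
        zero_list = zb;
        pthread_mutex_unlock(&zero_lock);
        return true;                    /* caller does the zeroing */
}

/* Called when the IO that zeroed @block completes. */
static void end_block_zeroing(unsigned long long block)
{
        struct zero_block **zbp, *zb;

        pthread_mutex_lock(&zero_lock);
        for (zbp = &zero_list; (zb = *zbp) != NULL; zbp = &zb->next) {
                if (zb->block == block) {
                        *zbp = zb->next;
                        free(zb);
                        break;
                }
        }
        pthread_cond_broadcast(&zero_done);
        pthread_mutex_unlock(&zero_lock);
}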

> 3) Are you dropping i_mutex when you are waiting for the zero-out to
> finish?

For XFS we're not holding the i_mutex - and we can't take the
i_mutex either as that will cause lock inversion issues. We don't
know what locks are held, we don't know whether it is safe to drop
and take locks, we don't even have the context to operate on
filesystem specific locks to avoid ordering problems. If we can't
sleep with the locks we already have held at this point, the DIO is
already broken for that filesystem.

Besides, if the i_mutex is already held for some filesystem when we
zero blocks, then we can't very well have concurrent block zeroing
in progress, and therefore can't hit this bug, right?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

