Re: [PATCHSET RFC v3 00/18] xfs: atomic file updates

On Thu, Apr 01, 2021 at 06:56:20AM +0300, Amir Goldstein wrote:
> On Thu, Apr 1, 2021 at 4:14 AM Darrick J. Wong <djwong@xxxxxxxxxx> wrote:
> >
> > Hi all,
> >
> > This series creates a new FIEXCHANGE_RANGE system call to exchange
> > ranges of bytes between two files atomically.  This new functionality
> > enables data storage programs to stage and commit file updates such that
> > reader programs will see either the old contents or the new contents in
> > their entirety, with no chance of torn writes.  A successful call
> > completion guarantees that the new contents will be seen even if the
> > system fails.
> >
> > User programs will be able to update files atomically by opening an
> > O_TMPFILE, reflinking the source file to it, making whatever updates
> > they want to make, and exchange the relevant ranges of the temp file
> > with the original file.  If the updates are aligned with the file block
> > size, a new (since v2) flag provides for exchanging only the written
> > areas.  Callers can arrange for the update to be rejected if the
> > original file has been changed.
> >
> > The intent behind this new userspace functionality is to enable atomic
> > rewrites of arbitrary parts of individual files.  For years, application
> > programmers wanting to ensure the atomicity of a file update had to
> > write the changes to a new file in the same directory, fsync the new
> > file, rename the new file on top of the old filename, and then fsync the
> > directory.  People get it wrong all the time, and $fs hacks abound.
> > Here is the proposed manual page:
> >
> 
> I like the idea of modernizing FIEXCHANGE_RANGE very much and
> I think that the improved implementation and new(?) flags will be very
> useful just the way you designed them, but maybe something to consider...
> 
> Taking a step back and ignoring the existing xfs ioctl, all the use cases
> that you listed actually want MOVE_RANGE not exchange range.
> No listed use case does anything with the old data except dump it in the
> trash bin. Right?

The three use cases listed in the manpage don't do anything with the
old blocks.

However, there is use case #4: online filesystem repair, where we want to
be able to construct a new metadata file/directory/xattr tree, exchange
the new contents with the old, and still have the old contents attached
to the file so that we can (very carefully) tear down the internal
buffer caches and other incore state.  For /that/ use case, we require
truncation to
be a separate step.

> I do realize that implementing atomic extent exchange was easier back
> when that ioctl was implemented for xfs and ext4 and I realize that
> deferring inode unlink was much simpler to implement than deferred
> extent freeing, but seeing how punch hole and dedupe range already
> need to deal with freeing target inode extents, it is not obvious to me that
> atomically freeing the target inode's extents instead of exchanging them
> is a bad idea
> (given the appropriate opt-in flags).
> 
> Is there a good reason for keeping the "freeing old blocks with unlink"
> strategy the only option?

Making userspace take the extra step of deciding what to do with the
tempfile (and when!) after the operation reduces the amount of work that
has to be done in the hot path, since we know that the only work we need
to do is switch the mappings (and the reverse mappings).

If this became a move operation where we drop the file2 blocks, it would
be necessary to traverse the refcount btree to see if the blocks are
shared, update the refcount btree, and possibly update the free space
btrees as well.  The current design permits us to skip all that, which
is all the more useful if the operation is synchronous.

Consider also that inactivation of inodes will soon become a background
operation in XFS, which means that userspace soon won't even have to
wait for that part.

--D

> 
> Thanks,
> Amir.


