On Thu, Jan 31, 2019 at 11:13 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Tue, Jan 29, 2019 at 08:26:43AM +1100, Dave Chinner wrote:
> > Really, though, for this use case it's make more sense to have "per
> > file freeze" semantics. i.e. if you want a consistent backup image
> > on snapshot capable storage, the process is usually "freeze
> > filesystem, snapshot fs, unfreeze fs, do backup from snapshot,
> > remove snapshot". We can already transparently block incoming
> > writes/modifications on files via the freeze mechanism, so why not
> > just extend that to per-file granularity so writes to the "very
> > large read-mostly file" block while it's being backed up....
> >
> > Indeed, this would probably only require a simple extension to
> > FIFREEZE/FITHAW - the parameter is currently ignored, but as defined
> > by XFS it was a "freeze level". Set this to 0xffffffff and then it
> > freezes just the fd passed in, not the whole filesystem.
> > Alternatively, FI_FREEZE_FILE/FI_THAW_FILE is simple to define...
>
> This sounds like you want a lease (aka oplock), which we already have
> implemented.

Yes, that's possibly true.
I think it could make sense to skip the reflink optimization for files
that are open for write in our workloads.
I'll need to check with my peers.

Thanks,
Amir.
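
P.S. In case it helps the discussion, below is a rough userspace sketch of
how a backup/clone process could use the existing lease API that Matthew
points to (fcntl F_SETLEASE). It is untested, the file path is made up, and
the error handling is minimal; it only illustrates taking a read lease so
the kernel notifies us before a writer opens the file:

#define _GNU_SOURCE		/* F_SETLEASE is Linux-specific */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t lease_broken;

static void on_sigio(int sig)
{
	/* A writer wants the file; we should finish or abort the clone
	 * within /proc/sys/fs/lease-break-time seconds. */
	lease_broken = 1;
}

int main(void)
{
	/* hypothetical path, for illustration only */
	int fd = open("/data/very-large-file", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* SIGIO is the default lease-break notification */
	signal(SIGIO, on_sigio);

	/* Read lease: AFAICS granted only if no one else has the file
	 * open for write; broken when someone opens it for write or
	 * truncates it. */
	if (fcntl(fd, F_SETLEASE, F_RDLCK) < 0) {
		perror("F_SETLEASE");
		return 1;
	}

	/* ... reflink/clone or copy the file here, checking lease_broken
	 * periodically and bailing out if it is set ... */

	fcntl(fd, F_SETLEASE, F_UNLCK);	/* drop the lease */
	close(fd);
	return 0;
}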