On Tue, Sep 13, 2016 at 11:53:11AM +1000, Nicholas Piggin wrote:
> - Application mmaps a file, faults in block 0
> - FS allocates block, creates mappings, syncs metadata, sets "no fsync"
>   flag for that block, and completes the fault.
> - Application writes some data to block 0, completes userspace flushes
>
> * At this point, a crash must return with above data (or newer).
>
> - Application starts writing more stuff into block 0
> - Concurrently, fault in block 1
> - FS starts to allocate, splits trees including mappings to block 0
>
> * Crash
>
> Is that right? How does your filesystem lose data before the sync
> point?

With all current file systems, chances are your metadata hasn't been
flushed out. You could write all metadata synchronously from the page
fault handler, but that's basically asking for all kinds of deadlocks.

> If there is any huge complexity or unsolved problem, it is in XFS.
> Conceptual problem is simple.

Good to have you back and making all the hard things simple :)