Re: Atomic non-durable file write API

On Tue, 28 Dec 2010 23:10:51 +0100 Olaf van der Spek <olafvdspek@xxxxxxxxx>
wrote:

> On Tue, Dec 28, 2010 at 9:59 PM, Neil Brown <neilb@xxxxxxx> wrote:
> >> Writing code is a lot of work and one should have the design clear
> >> before writing code, IMO.
> >
> > Yes and no.
> >
> > Having some design is obviously important before starting to code.
> > However it is a common experience that once you start writing code, you start
> > to see all the holes in your design - all the corner cases that you didn't
> > think about.  So sometimes writing some proof-of-concept code is a very
> > valuable step in the design process.
> 
> Sometimes, yes.
> 
> > I think the real disconnect here is that you haven't really established or
> > justified a need.
> 
> True, but all those exceptions (IMO) should be (proven to be) no problem.
> I'd prefer designs that don't have such exceptions. I may not be able
> to think of a concrete problem right now, but that doesn't mean such
> problems don't exist.

Very true.  But until such problems are described and understood, there is not
a lot of point trying to implement a solution.  Premature implementation,
like premature optimisation, is unlikely to be fruitful.  I know this from
experience.

> 
> I also don't understand why providing this feature is such a
> (performance) problem.
> Surely the people that claim this should be able to explain why.

Without a concrete design, it is hard to assess the performance impact.  I
would guess that those who anticipate a significant performance impact are
assuming a more feature-full implementation than you are, and they are
probably doing that because they feel that you need the extra features to
meet the actual needs (and so suggest those needs are best met by a DBMS rather
than a file-system).
Of course this is just guesswork.  Without concrete reference points it is
hard to be sure.

> 
> > You seem to be asking for the ability to atomically change the data in a file
> > without changing the metadata.  I cannot see why you would want this.  Maybe
> > you could give an explicit use-case??
> 
> Where losing meta-data is bad? That should be obvious.
> Or where losing file owner is bad? Still thinking about that one.

This is a bit left-field, but I think that losing metadata is always a good
thing.  A file should contain data - nothing else.  At all.  Owner and access
permissions should be based on location as dictated by external policy....
but yeah - off topic.

Clearly maintaining metadata by creating a new file and renaming it into place
easy for root (chown/chmod/etc).  So you are presumably envisaging situations
where a non-root user has write access to a file that they don't own, and
they want to make an atomic data-update to that file.
Sorry, but I think that allowing non-owners to write to a file is a really
really bad idea and providing extra support for that use-case is completely
unjustifiable.
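
For concreteness, here is a minimal sketch in C of that create-and-rename
update.  The helper name and the error handling are mine, not from this
thread; the fchown() call is exactly the step that requires privilege, and
there is deliberately no fsync(), matching the "non-durable" framing of the
subject line:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: atomically replace the data in 'path' while
 * preserving its mode and ownership.  The fchown() fails with EPERM
 * unless the caller has CAP_CHOWN - the point being made above. */
int update_atomic(const char *path, const char *buf, size_t len)
{
	char tmp[PATH_MAX];
	struct stat st;
	int fd;

	if (stat(path, &st) < 0)
		return -1;
	if (snprintf(tmp, sizeof(tmp), "%s.tmp", path) >= (int)sizeof(tmp))
		return -1;

	fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, st.st_mode & 07777);
	if (fd < 0)
		return -1;
	if (write(fd, buf, len) != (ssize_t)len)
		goto fail;
	if (fchown(fd, st.st_uid, st.st_gid) < 0)	/* needs privilege */
		goto fail;
	if (close(fd) < 0) {
		unlink(tmp);
		return -1;
	}
	/* rename(2) is atomic: readers see either the old data or the
	 * new data, never a mix.  With no fsync() the update is atomic
	 * but not guaranteed durable across a crash. */
	if (rename(tmp, path) < 0) {
		unlink(tmp);
		return -1;
	}
	return 0;

fail:
	close(fd);
	unlink(tmp);
	return -1;
}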

If you want multiple people to be able to update some data you should have
some way to ask the owner to make an update.  That could be:
  - setuid program
  - daemon which authenticates requests
  - distributed workflow tool like 'git' where you speak to the owner
    and ask them to pull updates.

and there are probably other options.  But un-mediated writes to a file you
don't own?  Just say NO!

NeilBrown


> 
> > Another significant issue here is "how much atomicity can we justify".
> > One possibility is for the file system not to provide any atomicity, and so
> > require lots of repair after a crash:  fsck for the filesystem, "make clean"
> > for your compile tree, removal of stray temp files etc for other subsystems.
> >
> > On the other extreme we could allow full transactions encompassing
> > multiple changes to multiple files which a guarantee to be either committed
> > completely or not at all after a crash.
> >
> > We gave up on the first extreme about a decade ago when journalling
> > filesystems became available for Linux.  There seems to be little desire to
> > pay the cost of ever implementing the other extreme in general purpose
> > filesystems.
> 
> Note that I'm not asking for this other extreme.
> 
> > So the important question is "Where on that spectrum of options should we be?"
> > The answer has to be based on cost/benefit.  The cost of adding journalling
> > was quite high, but the benefit of not having to fsck an enormous filesystem
> > after a crash is enormous, so it is a cost we have chosen to pay.
> >
> > If you want some extra level of atomicity, you need to demonstrate either a
> > high benefit or a low cost.  There seems to be some scepticism as to whether
> > you can.  A convincing use-case might demonstrate the high benefit.  Working
> > code might demonstrate low cost.  But you really need to provide at least one
> > (ideally both) or people are unlikely to take you seriously.
> 
> I understand.
> 
> Olaf
