Re: Atomic non-durable file write API

On Tue, Dec 28, 2010 at 9:59 PM, Neil Brown <neilb@xxxxxxx> wrote:
>> Writing code is a lot of work and one should have the design clear
>> before writing code, IMO.
>
> Yes and no.
>
> Having some design is obviously important before starting to code.
> However it is a common experience that once you start writing code, you start
> to see all the holes in your design - all the corner cases that you didn't
> think about. So sometimes writing some proof-of-concept code is a very
> valuable step in the design process.

Sometimes, yes.

> I think the real disconnect here is that you haven't really established or
> justified a need.

True, but in my opinion all those exceptions should be proven to be harmless.
I'd prefer a design that doesn't have such exceptions. I may not be able
to think of a concrete problem right now, but that doesn't mean such
problems don't exist.

I also don't understand why providing this feature would be such a
performance problem.
Surely the people who claim it is should be able to explain why.

> You seem to be asking for the ability to atomically change the data in a file
> without changing the metadata. I cannot see why you would want this. Maybe
> you could give an explicit use-case??

A case where losing metadata is bad? That should be obvious.
A case where losing the file owner is bad? I'm still thinking about that one.
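
For concreteness, here is a rough sketch (mine, purely for illustration; the
helper name and the ".tmp" suffix are made up) of how applications get an
atomic replace today, and why it loses metadata: the new data goes into a
temporary file that is then rename()d over the original. Readers see either
the old contents or the new contents, never a mix, but the result is a
different inode, so the original owner, mode, ACLs, xattrs and extra hard
links do not survive unless the caller restores them by hand.

/* Conventional atomic-replace via rename(): a minimal sketch, not a
 * proposal.  The temporary file is a new inode, so the original file's
 * owner, mode, ACLs, xattrs and extra hard links are lost unless the
 * caller copies them back explicitly. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int replace_file(const char *path, const void *buf, size_t len)
{
    char tmp[4096];
    int fd;

    if (snprintf(tmp, sizeof(tmp), "%s.tmp", path) >= (int)sizeof(tmp))
        return -1;

    fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    /* A real implementation would loop on short writes. */
    if (write(fd, buf, len) != (ssize_t)len)
        goto fail;

    /* This fsync() is what applications add for durability.  The
     * "non-durable" case under discussion is wanting the atomic
     * replace without being forced to pay for this. */
    if (fsync(fd) < 0)
        goto fail;

    if (close(fd) < 0) {
        unlink(tmp);
        return -1;
    }

    /* rename() replaces the old name atomically: other processes see
     * either the old file or the new one, never a partial mix. */
    if (rename(tmp, path) < 0) {
        unlink(tmp);
        return -1;
    }
    return 0;

fail:
    close(fd);
    unlink(tmp);
    return -1;
}

Restoring the metadata afterwards takes extra work (fchown(), fchmod(),
copying xattrs), and some of it cannot be done at all from an unprivileged
process, e.g. giving ownership back to another user, or fixing up other hard
links that still point at the old inode.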

> Another significant issue here is "how much atomicity can we justify".
> One possibility is for the file system not to provide any atomicity, and so
> require lots of repair after a crash: fsck for the filesystem, "make clean"
> for your compile tree, removal of stray temp files etc for other subsystems.
>
> On the other extreme we could allow full transactions encompassing
> multiple changes to multiple files which are guaranteed to be either committed
> completely or not at all after a crash.
>
> We gave up on the first extreme about a decade ago when journalling
> filesystems became available for Linux. There seems to be little desire to
> pay the cost of ever implementing the other extreme in general purpose
> filesystems.

Note that I'm not asking for this other extreme.

> So the important question is "Where on that spectrum of options should we be?"
> The answer has to be based on cost/benefit. The cost of adding journalling
> was quite high, but the benefit of not having to fsck an enormous filesystem
> after a crash is enormous, so it is a cost we have chosen to pay.
>
> If you want some extra level of atomicity, you need to demonstrate either a
> high benefit or a low cost. There seems to be some scepticism as to whether
> you can. A convincing use-case might demonstrate the high benefit. Working
> code might demonstrate low cost. But you really need to provide at least one
> (ideally both) or people are unlikely to take you seriously.

I understand.

Olaf