Re: Atomic non-durable file write API

On Tue, 28 Dec 2010 18:22:42 +0100 Olaf van der Spek <olafvdspek@xxxxxxxxx>
wrote:

> On Mon, Dec 27, 2010 at 8:30 PM, Christian Stroetmann
> <stroetmann@xxxxxxxxxxxxx> wrote:
> > Btw.: There is even no analogy: "The concepts are the same".
> 
> So? Doesn't mean you implement full ACID DB-like transactions.
> 
> >>>> Still waiting on any hint for why that performance loss would happen.
> >>>
> >>> From my point of view, the loss of performance depends on what is
> >>> benchmarked in which way.
> >>
> >> Maybe, but still no indication of why.
> >
> > If you have a solution, then you really should show other persons the
> > working source code.
> 
> I don't have source code.
> Are you not capable of reasoning about something without having
> concrete source code?
> 
> > For me speaking: I like such technologies and I'm also interested in your
> > attempts.
> 
> Writing code is a lot of work and one should have the design clear
> before writing code, IMO.

Yes and no.

Having some design is obviously important before starting to code.
However, it is a common experience that once you start writing code, you start
to see all the holes in your design - all the corner cases that you didn't
think about.  So sometimes writing some proof-of-concept code is a very
valuable step in the design process.
Then of course you need to do some testing to see if the code actually
performs as hoped or expected.  That testing may cause the design to be
revised.
So asking for code early is not necessarily a bad thing.

I think the real disconnect here is that you haven't really established or
justified a need.

You seem to be asking for the ability to atomically change the data in a file
without changing the metadata.  I cannot see why you would want this.  Maybe
you could give an explicit use-case??
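
For concreteness, the usual userspace idiom today is the
write-temp-then-rename dance sketched below.  This is only a sketch (the
function name and error handling are mine, not anything proposed in this
thread): it is atomic with respect to other processes, but it creates a
new inode, so owner/mode/timestamps of the original are not carried over,
and the fsync() before the rename is exactly the durability cost being
argued about.

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: atomically replace path's contents as seen by other
 * processes.  A real version would loop on short writes. */
static int atomic_replace(const char *path, const void *buf, size_t len)
{
	char tmp[PATH_MAX];
	int fd;

	if (snprintf(tmp, sizeof(tmp), "%s.tmp", path) >= (int)sizeof(tmp))
		return -1;

	fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (fd < 0)
		return -1;

	if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
		close(fd);
		unlink(tmp);
		return -1;
	}

	/* rename(2) atomically points "path" at the new inode. */
	if (close(fd) != 0 || rename(tmp, path) != 0) {
		unlink(tmp);
		return -1;
	}
	return 0;
}

Whatever rename() cannot give you here - the original inode and its
metadata - is presumably the gap you are pointing at, but that still
needs a use-case to justify it.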

Another significant issue here is "how much atomicity can we justify?"
One possibility is for the filesystem not to provide any atomicity, and so
require lots of repair after a crash: fsck for the filesystem, "make clean"
for your compile tree, removal of stray temp files, etc., for other subsystems.

At the other extreme, we could allow full transactions encompassing
multiple changes to multiple files, which are guaranteed to be either
committed completely or not at all after a crash.
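
To make that extreme concrete, a transactional API might look something
like the fragment below.  To be clear: these calls do not exist in Linux
or anywhere else I know of; the names are invented purely for
illustration.

/* Hypothetical pseudo-C, for illustration only. */
int txn = fs_txn_begin();
int fd1 = openat_txn(txn, "fileA", O_WRONLY | O_TRUNC);
int fd2 = openat_txn(txn, "fileB", O_WRONLY | O_TRUNC);
write(fd1, buf_a, len_a);
write(fd2, buf_b, len_b);
fs_txn_commit(txn);	/* after a crash: both updates or neither */

The filesystem would have to track an arbitrary amount of uncommitted
state across arbitrarily many files, which is roughly where the
implementation cost comes from.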

We gave up on the first extreme about a decade ago, when journalling
filesystems became available for Linux.  There seems to be little desire to
pay the cost of implementing the other extreme in general-purpose
filesystems.

So the important question is "Where on that spectrum of options should we be?"
The answer has to be based on cost/benefit.  The cost of adding journalling
was quite high, but the benefit of not having to fsck a huge filesystem
after a crash is enormous, so it is a cost we have chosen to pay.

If you want some extra level of atomicity, you need to demonstrate either a
high benefit or a low cost.  There seems to be some scepticism as to whether
you can.  A convincing use-case might demonstrate the high benefit.  Working
code might demonstrate low cost.  But you really need to provide at least one 
(ideally both) or people are unlikely to take you seriously.

NeilBrown
