On Sun, 18 May 2008 21:29:30 -0500 Eric Sandeen <sandeen@xxxxxxxxxx> wrote:

> Theodore Tso wrote:
> ...
> > Given how rarely people have reported problems, I think it's a really
> > good idea to understand what exactly our exposure is for
> > $COMMON_HARDWARE.
>
> I'll propose that very close to 0% of users will ever report "having
> barriers off seems to have corrupted my disk on power loss!" even if
> that's exactly what happened. And it'd be very tricky to identify in a
> post-mortem. Instead we'd probably see other weird things caught down
> the road during some later fsck or during filesystem use, and then
> suggest that they go check their cables, run memtest86, or something...
>
> Perhaps it's not the intent of this reply, Ted, but various other bits
> of this thread have struck me as trying to rationalize away the problem.

Not really. It's a matter of understanding how big the problem is. We
know what the cost of the solution is, and it's really large. It's a
tradeoff, and it is not obvious where the ideal answer lies, especially
when not all the information is available.

> If the discussion were about proper locking to avoid corruption, would
> we really be saying: well, gosh, it's a *really* small window, and
> *most* people won't hit it very often, and proper locking would slow
> things down....

If it slowed really important workloads by 30%, we'd be running around
with our hair on fire fixing that up. But fixing this one is nowhere
near as easy as fixing some locking problem.

> So I think that, as you suggest, looking for ways to make barriers less
> painful is the far better route, rather than sacrificing correctness for
> speed by turning them off by default when we know there is a chance of
> problems. People running journaling filesystems most likely expect to
> be safe from this sort of thing, not most of the time, but all of the time.

Well. Reducing the cost would of course make the decision easy.
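For reference, a minimal sketch of how barriers can be turned on explicitly
for an ext3 mount, independent of whatever the default ends up being (the
device and mount point names below are illustrative only):

    # enable write barriers for this mount
    mount -o barrier=1 /dev/sda1 /mnt/data

    # or persistently, via /etc/fstab
    /dev/sda1  /mnt/data  ext3  defaults,barrier=1  0  2

This only helps on devices that actually honour cache-flush commands; a
drive that acknowledges flushes without committing the data to the platter
defeats barriers regardless of the mount option.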