On 07/03/2013 11:00 AM, James Bottomley wrote:
On Wed, 2013-07-03 at 10:56 -0400, Ric Wheeler wrote:
On 07/03/2013 10:38 AM, Chris Mason wrote:
Quoting Ric Wheeler (2013-07-03 10:34:04)
As I was out walking Skeeter this morning, I was thinking a bit about the new
T10 atomic write proposal that Chris spoke about some time back.
Specifically, I think that we would see value only if the atomic write was
also durable - if not, we would still need to issue a SYNCHRONIZE_CACHE command,
which would mean it is effectively no more useful than a normal write?
Did I understand the proposal correctly? If I did, should we poke the usual T10
posse to nudge them (David Black, Fred Knight, etc?)...
I don't think the atomic writes should be a special case here. We've
already got the cache flush and fua machinery and should just apply it
on top of the atomic constructs...
-chris
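
For concreteness, a rough sketch of what "apply the existing machinery on top"
could look like from a filesystem, against today's submit_bio(rw, bio)
interface. REQ_ATOMIC is entirely made up (no such flag exists, bit value
chosen arbitrarily); the flush/FUA part is just today's WRITE_FLUSH_FUA:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>

#define REQ_ATOMIC	(1 << 30)	/* hypothetical all-or-nothing flag */

static void commit_endio(struct bio *bio, int err)
{
	/* record err for the journal thread, then wake it up */
	complete(bio->bi_private);
	bio_put(bio);
}

/* error handling trimmed for brevity */
static void submit_atomic_commit(struct block_device *bdev, sector_t sector,
				 struct page *page, struct completion *done)
{
	struct bio *bio = bio_alloc(GFP_NOFS, 1);

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio->bi_end_io = commit_endio;
	bio->bi_private = done;
	bio_add_page(bio, page, PAGE_SIZE, 0);

	/*
	 * Atomicity would come from the (made up) REQ_ATOMIC flag;
	 * durability still comes from the existing preflush + FUA
	 * handling, same as an ordinary journal commit block today.
	 */
	submit_bio(WRITE_FLUSH_FUA | REQ_ATOMIC, bio);
}
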
I should have sent this to the linux-scsi list I suppose, but wanted clarity
before embarrassing myself :)
Yes, it is better to have a wider audience
Adding in linux-scsi....
If we have to use fua/flush after an atomic write anyway, what does the
atomicity actually buy us? Why not just use a normal write?
It does not seem to add anything over what write + flush/fua already gives us?
It adds the all or nothing that we can use to commit journal entries
without having to worry about atomicity. The guarantee is that
everything makes it or nothing does.
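
To make that concrete, here is a toy commit path in userspace terms;
pwritev_atomic() is invented for this example (no such call exists), everything
else is plain POSIX. The point is which flushes exist for ordering and which
for durability:

#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* hypothetical: either every iovec reaches the media or none of it does */
extern ssize_t pwritev_atomic(int fd, const struct iovec *iov, int iovcnt,
			      off_t offset);

/*
 * Today: the commit record must not become stable before the log blocks,
 * otherwise a crash can expose a commit record that points at garbage;
 * hence the flush in the middle.
 */
static int commit_classic(int fd, const struct iovec *log, int n,
			  const struct iovec *commit, off_t log_off,
			  off_t commit_off)
{
	if (pwritev(fd, log, n, log_off) < 0)
		return -1;
	if (fdatasync(fd) < 0)		/* ordering point (SYNCHRONIZE CACHE) */
		return -1;
	if (pwritev(fd, commit, 1, commit_off) < 0)
		return -1;
	return fdatasync(fd);		/* make the commit durable */
}

/*
 * With an all-or-nothing write: log blocks and commit record go down as a
 * single unit, so the intermediate ordering flush disappears; one flush
 * remains if the commit must be durable right now.
 */
static int commit_atomic(int fd, const struct iovec *txn, int n, off_t off)
{
	if (pwritev_atomic(fd, txn, n, off) < 0)
		return -1;
	return fdatasync(fd);
}
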
I still don't see the difference between write + SYNC_CACHE and atomic write +
SYNC_CACHE.
If the write is atomic but not durable, it is not really usable as a hard
promise until after we flush it somehow.
In theory, if we got ordered tags working to ensure transaction vs. data
ordering, this would mean we wouldn't have to flush at all because the
disk image would always be journal consistent ... a bit like the old
soft updates scheme.
James
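
For illustration, a sketch of the ordered-tag idea. REQ_ORDERED below is
invented (there is no such per-bio flag, and ordered tags are not currently
plumbed through the block layer); the idea is that the commit write carries an
ORDERED task attribute, so the target cannot start it before every earlier
queued write has completed:

#include <linux/bio.h>
#include <linux/blkdev.h>

#define REQ_ORDERED	(1 << 29)	/* hypothetical: issue with an ORDERED tag */

static void submit_commit_with_ordered_tag(struct bio **data, int nr,
					   struct bio *commit)
{
	int i;

	for (i = 0; i < nr; i++)
		submit_bio(WRITE, data[i]);	/* SIMPLE tags, device may reorder */

	/*
	 * ORDERED semantics: the target must finish all earlier commands
	 * before starting this one, so a crash never exposes a commit
	 * record ahead of its data.  This is ordering, not durability:
	 * nothing here says when the write cache gets destaged.
	 */
	submit_bio(WRITE | REQ_ORDERED, commit);
}
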
Why not have the atomic write actually imply that it is atomic and durable for
just that command?
Ric
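
If the command itself guaranteed both, the toy commit path above would collapse
to a single call with no trailing flush, roughly FUA semantics applied to the
whole all-or-nothing payload. Again, pwritev_atomic_durable() is invented for
illustration:

#include <sys/types.h>
#include <sys/uio.h>

/* hypothetical: all or nothing, and on stable media when it returns */
extern ssize_t pwritev_atomic_durable(int fd, const struct iovec *iov,
				      int iovcnt, off_t offset);

static int commit_atomic_durable(int fd, const struct iovec *txn, int n,
				 off_t off)
{
	/* no fdatasync() needed: durability is part of the command */
	return pwritev_atomic_durable(fd, txn, n, off) < 0 ? -1 : 0;
}

The obvious cost is that every atomic write would then pay for durability even
when the caller does not need it yet, which is presumably why Chris would
rather keep the flush/fua flags separate.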