Howard Chu wrote:
Andy Rudoff wrote:
On the other side of the coin, I remember Dave talking about this
during our NVM discussion at LSF last year, and I got the impression
that the size and number of writes he'd need supported before he could
really stop using his journaling code were potentially large. Dave:
perhaps you can re-state the number of writes and their total size
that would have to be supported by block-level atomics in order for
them to be worth using by XFS?
If you're dealing with a typical update-in-place database then there's no
upper bound on this: a DB transaction can be arbitrarily large, and any partial
write will result in corrupted data structures.
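
For illustration only (a hypothetical sketch, not code from any particular
engine), this is the failure mode: one logical page overwrite that the device
may only commit sector by sector, so a crash mid-write leaves the page torn.

/* Hypothetical update-in-place page overwrite.  Devices typically only
 * guarantee atomicity per 512 B or 4 KiB sector, so power loss during
 * this pwrite() can leave the 8 KiB page half old, half new -- which is
 * why such engines need a journal (or large block-level atomics). */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define PAGE_SIZE 8192

int update_page_in_place(int fd, off_t page_off, const uint8_t *new_page)
{
    if (pwrite(fd, new_page, PAGE_SIZE, page_off) != PAGE_SIZE)
        return -1;
    return fsync(fd);   /* gives durability, but not atomicity */
}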
On the other hand, with a multi-version copy-on-write DB (like mine,
http://symas.com/mdb/ ), all you need is a guarantee that all data writes
complete before any metadata is updated.
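
A minimal sketch of that ordering (the names and on-disk layout here are
assumptions for illustration, not LMDB's actual code): new page versions go
to unused space, an fsync() acts as the "all data writes complete" barrier,
and only then is the small meta record pointing at the new root written and
synced.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

struct meta { uint64_t root_pgno; uint64_t txn_id; };

int commit_txn(int fd, const void *new_pages, size_t len, off_t free_off,
               const struct meta *m, off_t meta_off)
{
    /* 1. Write new versions of modified pages into free space;
     *    old versions are never touched. */
    if (pwrite(fd, new_pages, len, free_off) != (ssize_t)len)
        return -1;

    /* 2. Barrier: data must be durable before metadata points at it. */
    if (fsync(fd) != 0)
        return -1;

    /* 3. Publish the transaction with one small, sector-atomic meta write. */
    if (pwrite(fd, m, sizeof *m, meta_off) != (ssize_t)sizeof *m)
        return -1;
    return fsync(fd);
}

If the crash happens before step 3 completes, the old meta record still points
at the old root and the previous version of the database remains intact.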
IMO, catering to the update-in-place approach is an exercise in futility: it
would require significant memory resources at every link in the storage
chain, and whatever amount is available will never be sufficient.
My proposal from last November could be implemented without requiring any more
state than is already present in current storage controllers.
http://www.spinics.net/lists/linux-fsdevel/msg70047.html
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/