Hello, all.

Questions...

* Meta-level question: is there a Web-based discussion forum for DM-integrity? I searched for one but did not find one -- I found only the email address "dm-crypt@xxxxxxxx".

* What is/are the difference[s] between "direct" mode and "bitmap" mode in DM-integrity?

* Does "bitmap" mode in DM-integrity write the bitmap to the underlying _storage_ while the system is running [as opposed to e.g. only at {normal-shutdown}-time]? My concern here is that what _should_ be large high-throughput streaming writes could become head-seeking, hard-drive-destroying events when some/all of the underlying storage is on "spinning rust".
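For reference, here is how I _think_ the three modes are selected from userspace with integritysetup. This is untested, "/dev/sdX1" is a placeholder, and the flags are just my reading of the integritysetup man page -- so please correct me if I have the semantics wrong:

    # journal mode [the default]: every write hits the journal first
    # and its final location second
    integritysetup format /dev/sdX1 --integrity crc32c
    integritysetup open /dev/sdX1 int0 --integrity crc32c

    # "direct" mode: no journal at all; presumably a crash mid-write
    # can leave a block whose data and checksum disagree
    integritysetup open /dev/sdX1 int0 --integrity crc32c \
        --integrity-no-journal

    # "bitmap" mode: no journal, but a dirty-region bitmap, with
    # tunable granularity and flush interval [I am unsure whether the
    # format step also needs --integrity-bitmap-mode]
    integritysetup open /dev/sdX1 int0 --integrity crc32c \
        --integrity-bitmap-mode \
        --bitmap-sectors-per-bit 32768 --bitmap-flush-time 10000

My "bitmap" question above is, in effect: is that bitmap written back to the disk itself on that flush interval while the system runs, and if so, how bad is the resulting seeking on spinning rust?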
For background/critique/feedback...

* I _don't_ want to use DM-integrity _journalling_, because I plan to use a journaling _filesystem_ at the top of the relevant stack, and because -- for the currently-planned application, please see my RAID5 construction plan below -- getting only one-_quarter_ of the underlying throughput [one quarter at _best_, i.e. _very_ naively assuming zero _seeking_ overhead _despite_ planning to use spinning rust for the primary storage] in exchange for greatly-enhanced data safety in the corner case of a {crash, reset, or power-off} during a write is _not_ an acceptable trade-off IMO.

* I currently plan to use a stack that will look at least something like this:

[the top level] a modern checksumming filesystem [probably BtrFS -- almost certainly _not_ ZFS, due to the issues inherent in using out-of-Linus's-kernel-tree filesystems directly on a Linux box, as opposed to e.g. in a virtual machine or via NFS]

Bcache [to try to keep the filesystem metadata on a mirrored pair of SSDs rather than allowing metadata to be stored only on the spinning rust]

Bcache-wise _cache_ storage
----------------------------------------
sub-layer 1: DM-integrity, so that if something in the cache sub-stack goes wrong in a detectable way, the in-kernel Bcache _might_ [if it is "smart enough"] "realize" that it should read the relevant block[s] from the primary storage instead, as a fallback

sub-layer 2: Linux MD RAID1, i.e. mirroring

sub-layer 3 [the bottom of this cache sub-stack]: two SATA SSDs

Bcache-wise _primary_ storage
------------------------------------------
Linux MD RAID5
----------------------
* using the RAID5-specific in-kernel cache code, so as to both alleviate write-hole concerns _and_ -- at least if choosing to use write-_back_ mode -- increase performance, especially when the next-up layer performs lots of _non_-{full-stripe} writes

* probably going to use this in write-_back_ mode, for performance, if at all... but I welcome arguments for write-_through_

RAID5-wise _cache_ storage
---------------------------------------
sub-layer 1: Linux MD RAID1, i.e. mirroring

sub-layer 2 [the bottom of this cache sub-stack]: two ultra-high-throughput writes-are-fine-with-me-I-am-not-an-SSD SCSI hard drives

RAID5-wise _primary_ storage
-----------------------------------------
DM-integrity, to detect silent corruption in assumed-{untrustworthy-by-design} consumer-grade hard drives

[at the bottom] raw partitions on >2 consumer-grade SATA hard drives

Please poke holes -- "poking" with logic and detailed explanation[s], of course ;-) -- in my above "design", and explain to me why I am wrong/crazy/both. ;-) [A command-level sketch of the whole stack is in the P.S. below, to give the hole-poking something concrete to aim at.]

I am also planning to build a reliability-comes-first-and-performance-comes-second-or-third-or-never RAID1 [RAID1 for use with a _filesystem_ directly on top of it, i.e. _not_ RAID1 as a "subroutine" of a larger block stack (as above)] -- any advice on using DM-integrity in _this_ context [or telling me e.g. "don't do that, and here's why not" ;-)] is _also_ welcome. [A sketch of what I have in mind for that one is in the P.P.S.]

Sincerely,
Abe
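P.S. Here is how I imagine assembling the big stack, bottom-up. This is an _untested_ sketch: every device name is a placeholder, and the flags are just my reading of the integritysetup/mdadm/make-bcache man pages, so please poke holes in the commands as well as the design:

    # RAID5-wise primary storage: DM-integrity on each consumer-grade
    # disk, so that silent corruption surfaces as a read error that MD
    # RAID5 can repair from parity. Opened in direct mode, per my
    # no-journalling stance above [maybe bitmap mode instead, pending
    # answers to my questions].
    for dev in sdc1 sdd1 sde1; do
        integritysetup format "/dev/$dev" --integrity crc32c
        integritysetup open "/dev/$dev" "int-$dev" --integrity crc32c \
            --integrity-no-journal
    done

    # RAID5-wise cache storage: a mirror of the two fast SCSI drives,
    # to serve as the RAID5 write journal.
    mdadm --create /dev/md10 --level=1 --raid-devices=2 \
          /dev/sdf1 /dev/sdg1

    # Bcache-wise primary storage: RAID5 over the integrity devices,
    # journaled to the mirror above [the RAID5-specific in-kernel
    # cache code].
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          --write-journal=/dev/md10 \
          /dev/mapper/int-sdc1 /dev/mapper/int-sdd1 /dev/mapper/int-sde1

    # The journal defaults to write-through; switch it to write-back
    # for performance [the mode I said I would probably use].
    echo write-back > /sys/block/md0/md/journal_mode

    # Bcache-wise cache storage: RAID1 of the two SATA SSDs, with
    # DM-integrity stacked on top of the mirror [sub-layer 1 above
    # sub-layer 2, as in the plan].
    mdadm --create /dev/md11 --level=1 --raid-devices=2 \
          /dev/sdh1 /dev/sdi1
    integritysetup format /dev/md11 --integrity crc32c
    integritysetup open /dev/md11 int-ssd --integrity crc32c \
        --integrity-no-journal

    # Bcache itself: the RAID5 array as the backing device, the
    # integrity-protected SSD mirror as the cache device...
    make-bcache -B /dev/md0 -C /dev/mapper/int-ssd

    # ...and finally the checksumming filesystem on top.
    mkfs.btrfs /dev/bcache0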
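P.P.S. For the standalone reliability-first RAID1, what I currently have in mind -- again untested, with placeholder device names -- is DM-integrity under _each_ mirror leg [rather than one integrity layer on top of the mirror], on the theory that a checksum failure on one leg then surfaces to MD as a read error on that specific device, which MD can repair from the other leg. Since this array is reliability-first rather than throughput-first, I might even accept the default journal mode here:

    integritysetup format /dev/sdj1 --integrity crc32c
    integritysetup format /dev/sdk1 --integrity crc32c
    integritysetup open /dev/sdj1 int-j --integrity crc32c
    integritysetup open /dev/sdk1 int-k --integrity crc32c

    mdadm --create /dev/md20 --level=1 --raid-devices=2 \
          /dev/mapper/int-j /dev/mapper/int-k

    # filesystem directly on top, as stated
    mkfs.btrfs /dev/md20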