On Thu, Feb 02, 2012 at 10:04:59AM +0100, Bernd Schubert wrote:
> I think the point for network file systems is that they can reuse the
> disk-checksum for network verification. So instead of calculating a
> checksum for network and disk, just use one for both. The checksum is
> also supposed to be cached in memory, as that avoids re-calculation
> for other clients.
>
> 1)
> client-1: sends data and checksum
>
> server: receives those data and verifies the checksum -> network
> transfer was ok, sends data and checksum to disk
>
> 2)
> client-2 ... client-N: ask for those data
>
> server: sends cached data and cached checksum
>
> client-2 ... client-N: receive data and verify checksum
>
>
> So the whole point of caching checksums is to avoid the server having
> to recalculate them for dozens of clients. Recalculating checksums
> simply does not scale with an increasing number of clients that want
> to read data processed by another client.

This makes sense indeed. My argument was only about exposing the
storage hw format cksum to userland (through some new ioctl, so that
whatever program reads from the cache could do a further verification
of the pagecache data on the client).

The network fs client lives in the kernel and the network fs server
lives in the kernel, so there is no need to expose the cksum to
userland to do what you described above.

What I meant is: if we can't trust the pagecache to be correct (after
the network fs client code has already checked the cksum that the
server cached and sent to the client along with the cached data), I
don't see much value added by a further verification done by the
userland program running on the client and reading from the client
pagecache.

If we can't trust the client pagecache to be safe against memory
bitflips or software bugs, we can hardly trust anonymous memory
either.
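
For illustration only, here is a minimal userspace sketch of the
checksum-caching flow you describe above (compute once on the write
path, hand the cached value out verbatim to every reader). It is not
taken from any real fs code; the names cached_block, server_write and
server_read are made up, and zlib's crc32() just stands in for
whatever on-disk checksum the fs actually uses.

/* gcc sketch.c -lz -- illustrative only, not real fs code */
#include <stdint.h>
#include <string.h>
#include <zlib.h>		/* crc32() as a stand-in checksum */

#define CHUNK_SIZE 4096

struct cached_block {
	uint32_t cksum;			/* cached alongside the data */
	unsigned char data[CHUNK_SIZE];
};

/*
 * Write path (client-1 -> server): verify the checksum that came over
 * the wire, then keep both data and checksum in the cache so later
 * readers can reuse them without recomputation.
 */
static int server_write(struct cached_block *blk,
			const unsigned char *buf, uint32_t wire_cksum)
{
	uint32_t c = crc32(0L, buf, CHUNK_SIZE);

	if (c != wire_cksum)
		return -1;	/* network corruption: reject, let client retry */

	memcpy(blk->data, buf, CHUNK_SIZE);
	blk->cksum = c;		/* cache it together with the data */
	return 0;
}

/*
 * Read path (server -> client-2 ... client-N): no checksum is
 * calculated at all; the cached value is sent along with the cached
 * data, and each client verifies it on its side.
 */
static void server_read(const struct cached_block *blk,
			unsigned char *buf, uint32_t *cksum_out)
{
	memcpy(buf, blk->data, CHUNK_SIZE);
	*cksum_out = blk->cksum;
}

All the verification here happens in kernel-side fs code in the real
case, which is exactly why none of it needs a new ioctl to userland.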