On Thu, 2013-10-24 at 16:30 +0000, Myklebust, Trond wrote:
> Those programs need to recompute the checksum data anyway in order to
> verify and/or update it. Checksums that are computed by some third
> party application have exactly zero value for integrity checking.

No, that's exactly the point... the applications should _NOT_ set those
checksums, especially not automagically (since then you'd never notice
when some application is buggy, or writes/modifies files when you don't
expect it to).

The idea is that there is one application (in my case it's just a
script) which sets the integrity data and verifies it. This works very
well for e.g. large data archives, where most of the time (but not
always) you only read files, write new files or move existing ones
around, and only rarely modify an existing file's contents.

I already do this on local filesystems, where it works very nicely with
XATTRs... but now I want to move it to a central data cluster (which
clients access via NFS), and that's where the problems start: when I
add new data to the archive from the clients, I can neither attach the
XATTRs nor verify them from the clients.

Cheers,
Chris.
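
PS: in case it helps to picture the local setup, it boils down to
something like the following (a simplified sketch, not my actual
script; SHA-256 and the "user.sha256" attribute name are just
examples):

    import hashlib
    import os

    XATTR_NAME = "user.sha256"   # example attribute name, not fixed

    def set_checksum(path):
        # Compute the file's digest and store it as an extended attribute.
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        os.setxattr(path, XATTR_NAME, digest.encode())

    def verify_checksum(path):
        # Recompute the digest and compare it against the stored attribute.
        stored = os.getxattr(path, XATTR_NAME).decode()
        actual = hashlib.sha256(open(path, "rb").read()).hexdigest()
        return stored == actual

On a local filesystem (or directly on the server) both steps work fine;
from an NFS client the setxattr()/getxattr() calls are exactly what I
cannot do at the moment.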