On Sun, Jan 28, 2018 at 04:21:59PM -0800, James Bottomley wrote:
> OK, so I don't believe that to be true either.  Secure Boot was
> something we did based on MS mandated technologies and something *some*
> people thought we had to impose strange policies over to please MS.
> However, IMA was never part of that secure boot solution, so trying to
> tar it with the same brush is unfair (and inaccurate).

This was based on an assertion Mimi made that we had to do the full
data checksum verification at file open time due to the requirements of
Trusted Boot.  I know I am incredibly privileged in that I don't have
to worry about Trusted Boot, so I don't have any personal knowledge one
way or another; this was a claim articulated by Mimi.

> The question I'm poking at is how integrity is enforced.  Right at the
> moment it's a small number of security hooks but they're on the fops
> gates (exec and the like).  To verify a per page hash, they'd have to
> be in the mm subsystem as well (regardless of whether it's IMA or fs-
> verity doing it) ... unless you're planning to ignore all the security
> hooks as well.

The fs-verity design plumbs this into the file system's readpage
methods, just as we do with fs/crypto.  Again, the idea was to make
something easy to use that would require minimal changes to the file
system (just as minimal changes are needed for fscrypt), and where you
could query the file to see if the verity bit is set; that query would
be the hook for the LSMs --- if you want to use LSMs.

Essentially, the file system would provide the mechanism (data
integrity verification cleanly hooked into the file system's readpage
method), and the policy could be done using an LSM, though it could
potentially be done via other, simpler mechanisms.

I think one of the things that made IMA challenging was that it was a
separate, foreign body that was stapled on top of the file system.
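To make the mechanism/policy split concrete, here is a minimal,
hypothetical sketch (Python rather than the actual kernel C) of what
per-block verification hooked into the read path amounts to: each block
read is hashed and compared against the authenticated Merkle leaf for
that block.  The real fs-verity format packs many digests per tree
block and verifies up the tree to a signed root; this toy keeps only
the leaf level, and the block size and helper names are made up.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block/page size for this sketch


def leaf_hashes(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Hash every block of the file, zero-padding the final partial block."""
    return [
        hashlib.sha256(data[off:off + block_size].ljust(block_size, b"\0")).digest()
        for off in range(0, max(len(data), 1), block_size)
    ]


def verify_block(block: bytes, index: int, hashes: list,
                 block_size: int = BLOCK_SIZE) -> bool:
    """What a readpage-time hook would do: re-hash the block just read
    and compare it against the authenticated leaf digest."""
    padded = block.ljust(block_size, b"\0")
    return hashlib.sha256(padded).digest() == hashes[index]
```

Doing this check in the readpage path means a tampered block is
rejected before the page is ever exposed to userspace, which is why no
extra hooks are needed in the mm subsystem.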
We're using a different approach, where it is integrated into the file
system, which makes the avoidance of locking problems *much* simpler,
since we're not trying to do file reads triggered by LSM hooks.

> > So in my opinion, clean design of the kernel trumps the requirement
> > of "not one change, not one jot, in the Docker client".
>
> OK, bad example on my part, thanks to runc and containerd I don't give
> much of a toss about the docker client.  I care much more about
> compliance with the container runtime standard.  At its base, anything
> you can do with tar is fine, because it uses tar to define the image.
> I buy that we can modify tools easily, but the same doesn't apply to
> standards.

OK, so what you care about is the file format.  Yes?

So suppose there were a solution which encapsulated the information
needed to create the fs-verity header and the PKCS7 signature in an
xattr --- which is how you carry it around in the tar image --- and
when the tarfile is unpacked, the software which does the unpacking
calls a library which checks for the xattr, removes it, writes out the
fs-verity header and Merkle tree, and then calls the ioctl which sets
the "verity" bit, thus instantiating the data integrity protection.
Would that meet your requirements?

In other words, the xattr in the tar file is just the method for
carrying the information; (a) it is not how the information would be
stored in the underlying file system when it is actually used, and (b)
it requires the userspace code to do this transformation, so we don't
have to build the Merkle tree in the kernel.  Is this sufficient for
your container use case?

Whether we require IMA as a dependency for fs-verity then becomes a
separate question, and I think that basically boils down to (a) what
value-add using IMA brings to fs-verity, and (b) what complexity IMA
imposes on fs-verity.  That's a pretty simple cost-benefit analysis.
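As a sketch of the userspace half of that proposal, here is a toy
Merkle-root construction the unpacking library would perform before
issuing the verity ioctl.  This is hypothetical and simplified: it uses
a binary tree of SHA-256 digests, whereas the real on-disk format packs
many digests into each tree block, and the header layout and ioctl
details are deliberately left out.

```python
import hashlib

BLOCK_SIZE = 4096
DIGEST_SIZE = 32  # SHA-256


def merkle_root(data: bytes, block_size: int = BLOCK_SIZE) -> bytes:
    """Toy Merkle root: hash each zero-padded data block, then repeatedly
    hash pairs of digests until a single root digest remains."""
    level = [
        hashlib.sha256(data[off:off + block_size].ljust(block_size, b"\0")).digest()
        for off in range(0, max(len(data), 1), block_size)
    ]
    while len(level) > 1:
        if len(level) % 2:
            level.append(b"\0" * DIGEST_SIZE)  # pad odd-length levels
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    # A real unpacker would then write out the tree plus the fs-verity
    # header and call the verity-enabling ioctl; the PKCS7 signature
    # carried in the xattr would cover (roughly) this root digest.
    return level[0]
```

The point of the sketch is that nothing here needs to happen in the
kernel: the library builds the tree from the extracted file data, so
the kernel only ever verifies, never constructs.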
And if the IMA integration is optional, that might be the best win-win
scenario.  People who want the extra value of IMA can pay the costs
(which might include the complexity burden imposed by inadequate
documentation), and those who don't can skip it.

					- Ted