On Sun, Jan 28, 2018 at 08:53:41PM -0500, Mimi Zohar wrote:
> A lot of people have requested being able to identify files based on
> pathnames.  I don't need to tell you this isn't safe from a security
> perspective.  So how would you identify these few files?  I doubt you
> are planning on hard coding them.  If you have a generic solution, I
> would really be interested in hearing it.

So what we are implementing with fs-verity is that verification is
enabled by setting a flag in the inode ("the verity bit") which causes
the file system to enforce the data integrity checks.  This bit can be
checked by using the FS_IOC_GETFLAGS ioctl (like any of the other
existing file system flags, such as immutable, append-only, no-COW,
etc.)

What this means is that it would be possible for a userspace
application to simply open a file (which might, for example, be a
privileged APK) and, before using it, check the verity bit via the
open file descriptor.  If the verity bit is set, then the userspace
application can safely read from the file, and know that it hasn't
been tampered with, even via an off-line evil maid attack.

In this particular case, there isn't even a need to use SELinux, or
indeed, any LSM at all.  No need to compile in EVM, no need to compile
in IMA, no need to compile in SELinux, etc.
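Roughly, the check could look like this from userspace (a minimal
sketch; the flag name FS_VERITY_FL and its bit value are placeholders
here, since the actual name and number are whatever the final patches
end up defining):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

#ifndef FS_VERITY_FL
#define FS_VERITY_FL 0x00100000	/* placeholder name/value for the verity bit */
#endif

int main(int argc, char **argv)
{
	int fd, flags;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Same ioctl already used for immutable, append-only, no-COW... */
	if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
		perror("FS_IOC_GETFLAGS");
		return 1;
	}
	if (flags & FS_VERITY_FL)
		printf("verity bit set: reads are integrity-checked\n");
	else
		printf("verity bit not set\n");
	close(fd);
	return 0;
}

Note that the application only has to trust this single bit; what to
do when the bit is clear is entirely up to the application (or to an
LSM's policy).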
In other use cases, whether or not a file has the verity bit set could
be used by an LSM that wishes to make policy restrictions --- for
example: "if the file is setuid, then the verity bit *must* be set".
Or there could be a policy where all executables *must* have the
verity bit set.

This model has the advantage of a very clean separation between the
policy and the mechanism, where the mechanism exports a single bit:
"is this file one which is protected by the verity bit?"

Granted, it is a different model than what IMA/EVM use.  But it is
much simpler, and it is optimized for use cases where most of the
files might not be data integrity protected (perhaps because most of
the security-critical files are located on a read-only volume being
protected using dm-verity).

Because we use a Merkle tree, we are also making the tradeoff between
a complete verification of the entire contents of the file at file
open time (which imposes a file open latency, and means that if the
file can be tampered with after it is opened, IMA won't detect the
problem), versus verification at readpage time (which means that you
might fail while reading the file, instead of finding out at open
time).  This is again consistent with dm-verity, where we do not
attempt to verify the checksum of the entire block device at system
startup; instead we check on each block read, and if the verification
fails, we fail the read in question.

For some use cases, the use of a full-file hash a la today's
IMA-Appraisal might be a better choice.  I have never claimed that
fs-verity was intended to be a replacement for IMA.

Cheers,

					- Ted

P.S.  I wonder if it was a mistake not to choose a whole new name for
IMA-Appraisal.  There is lots of documentation on the web which talks
about "IMA", and it's not clear whether it's supposed to mean
"IMA-Measure", or a generic term encompassing "IMA-Appraisal" and
"IMA-Measure".  One might be able to guess based on how out-of-date
the rest of the web page happens to be, but it's really not clear.
Also, the two concepts are quite different, and data integrity via
checking a digitally signed hash is only partially related to
"measuring" a file.

Perhaps it's related by virtue of the fact that you have to calculate
a cryptographic checksum over the entire file.  But once you get to
data integrity protected via a Merkle tree at file read time, this is
quite far away from any traditional definition of "measurement".  So
purely from a naming perspective, trying to take data integrity
verification using Merkle trees and forcing it into the IMA framework
might not be such a great fit.
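P.P.S.  For the curious, here is a much-simplified sketch of the
shape of readpage-time verification against a Merkle tree.  This is
not the actual fs-verity on-disk format or hash algorithm --- the toy
hash below stands in for something like SHA-256, and the tree has
only a single level --- it's just meant to illustrate why only the
blocks actually read ever need to be hashed:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NBLOCKS  2
#define BLKSIZE  8
#define HASHSIZE 8

/* Stand-in for a real cryptographic hash (FNV-1a folded into 8
 * bytes); a real implementation would use something like SHA-256. */
static void toy_hash(const uint8_t *data, size_t len, uint8_t out[HASHSIZE])
{
	uint64_t h = 0xcbf29ce484222325ULL;
	size_t i;

	for (i = 0; i < len; i++) {
		h ^= data[i];
		h *= 0x100000001b3ULL;
	}
	memcpy(out, &h, HASHSIZE);
}

int main(void)
{
	uint8_t blocks[NBLOCKS][BLKSIZE] = { "block-0", "block-1" };
	uint8_t leaves[NBLOCKS][HASHSIZE];
	uint8_t root[HASHSIZE], leaf[HASHSIZE], check[HASHSIZE];
	int i;

	/* Build the tree: hash each data block to get the leaf hashes,
	 * then hash the leaf level to get the root.  The root is the
	 * only thing that has to be trusted (e.g. digitally signed). */
	for (i = 0; i < NBLOCKS; i++)
		toy_hash(blocks[i], BLKSIZE, leaves[i]);
	toy_hash((const uint8_t *) leaves, sizeof(leaves), root);

	/* Readpage-time check for block 1: hash just that one block,
	 * compare with its leaf, and verify the leaf level against the
	 * trusted root.  Block 0 is never touched.  (A real
	 * implementation would cache already-verified tree pages
	 * instead of rehashing the leaf level on every read.) */
	toy_hash(blocks[1], BLKSIZE, leaf);
	toy_hash((const uint8_t *) leaves, sizeof(leaves), check);
	if (memcmp(leaf, leaves[1], HASHSIZE) != 0 ||
	    memcmp(check, root, HASHSIZE) != 0) {
		fprintf(stderr, "verification failed; fail the read\n");
		return 1;
	}
	printf("block 1 verified against the trusted root\n");
	return 0;
}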