On Thu, 2019-03-21 at 09:04 -0500, Chuck Lever wrote:
> 
> > On Mar 21, 2019, at 6:44 AM, Mimi Zohar <zohar@xxxxxxxxxxxxx> wrote:
> > On Wed, 2019-03-20 at 08:40 -0500, Chuck Lever wrote:
> >>> On Mar 19, 2019, at 3:29 PM, Mimi Zohar <zohar@xxxxxxxxxxxxx> wrote:
> >>> On Fri, 2019-03-08 at 16:29 -0500, Chuck Lever wrote:
> >>> Thanks, Serge, for bringing this thread to my attention.  Sorry for the
> >>> delayed response.
> >>>>> On Mar 8, 2019, at 4:23 PM, Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> >>>>> On Fri, Mar 08, 2019 at 04:11:06PM -0500, Chuck Lever wrote:
> >>>>>>> On Mar 8, 2019, at 4:10 PM, bfields@xxxxxxxxxxxx wrote:
> >>>>>>> On Thu, Mar 07, 2019 at 10:28:54AM -0500, Chuck Lever wrote:
> >>>>>>>> The NFS server needs to allow NFS clients to perform their own
> >>>>>>>> attestation and measurement.
> >>> 
> >>> Measurement and attestation is only one aspect.  The other aspect is
> >>> verifying the integrity of files.  Shouldn't the NFS server verify the
> >>> integrity of a file before allowing it to be served (e.g. malware)?
> >> 
> >> Hi Mimi, thanks for the review.
> >> 
> >> Architecturally, the server is not using the file's data, it is
> >> merely part of the filesystem that stores it. But that said, there
> >> are several concrete reasons why I feel an NFS server should not be
> >> involved in measurement/attestation, but only with storing file
> >> content and IMA metadata.
> > 
> > "Remote attestation" is the process of verifying the measurement list
> > against the TPM PCRs, based on a TPM quote.  I think you meant
> > "measurement/appraisal".
> > 
> >> 
> >> 1. The broadest attack surface for a remote filesystem is modification
> >> of data in flight. Attestation of the file on the server is not going
> >> to defend against that attack, only attestation on the client will do
> >> that. Is there a good reason to pay the cost of double attestation?
> > Doesn't the server have a responsibility to provide files that have
> > not been unintentionally or maliciously altered?
> It's a design goal of any filesystem to present unaltered file data
> to applications. But the responsibility is end-to-end. Adding extra
> checks in the middle introduces a cost.

Files are measured/appraised/audited based on the IMA policy.  Have you
measured the performance cost of measuring and appraising the files
being served?  Unless a policy has been supplied, the performance
impact, if any, would be limited to walking the IMA policy rules.
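
To make that cost concrete, a hypothetical server-side policy could be
as small as the fragment below.  The rule syntax is the one described
in Documentation/ABI/testing/ima_policy; the fsmagic value assumes,
purely for illustration, that the export sits on ext4 (0xef53 is the
ext2/3/4 superblock magic):

    measure  func=FILE_CHECK mask=MAY_READ fsmagic=0xef53
    appraise func=FILE_CHECK mask=MAY_READ fsmagic=0xef53
    audit    func=FILE_CHECK mask=MAY_READ fsmagic=0xef53

With rules like these, a matching file is hashed on first access and
the result is cached until the file changes; with no rules loaded, the
rule walk falls through and nothing further is done.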

> Measuring on the client is
> sufficient, and it is equivalent to what local filesystems do (and,
> it allows each client to apply its own security policy).

I'm not arguing with you about an end-to-end file integrity solution.
That is the goal, but one that depends on this proposed work, based on
fs-verity signatures.

> I'm going to claim here without proof that there is little value in
> using IMA on an NFS server that serves NFS clients that are not
> IMA-aware. :-)

For systems that haven't implemented the proposed end-to-end file
integrity solution, verifying the file integrity on the server is all
the more important.

> 
> >> 2. It is possible (perhaps even likely) that the NFS server and a
> >> client of that server will have different IMA policies and even
> >> different file signing authorities.
> > That doesn't negate the due diligence on the server's part of
> > preventing the spread of malware.
> Commercial NFS servers (like NetApp filers) perform malware and
> integrity checking via a scrubbing agent rather than checking in a
> hot path. Filesystems are not only responsible for leaving data
> unchanged; they also have performance requirements.

Any userspace application leaves a window of opportunity between the
time a file is created or modified and the time the application
verifies it.  This is one of the main reasons for IMA being in the
kernel.

> 
> >> A third, perhaps related, reason is that NFS can run on non-Linux NFS
> >> servers which would not have any attestation at all. An NFS client
> >> should not have to rely on the server for attestation, but should
> >> trust only its own measurement of each file, which would be done as
> >> late as possible before use.
> > The ima_file_check() hook can also audit the file, providing
> > additional forensic information (e.g. the file hash).
> IIUC, you are talking about troubleshooting, which should be
> rare. That can be done with tools on the server if needed, but
> IMO can be avoided in performance-sensitive paths.

No, this isn't about "troubleshooting", but about auditing the files
served and using the file hashes for forensic investigations.[1][2]
(A sketch of where this hook sits in a server's open path follows
below.)

Mimi

[1] Commit e7c568e0fd0c ("ima: audit log hashes")
[2] https://www.fireeye.com/blog/threat-research/2016/11/extending_linux_exec.html
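
P.S. Since the ima_file_check() hook came up, here is a minimal sketch
of how a server-side open path drives it.  This is illustrative only,
not the actual fs/nfsd code: example_server_open() is a made-up name,
and the two-argument ima_file_check() signature assumed here is the
v5.0-era one.

#include <linux/fs.h>
#include <linux/ima.h>

/*
 * Hypothetical sketch: hand a just-opened file to IMA, which walks
 * the FILE_CHECK policy rules and measures, appraises, and/or audits
 * the file as those rules direct.
 */
static int example_server_open(struct file *file, int may_flags)
{
	int err;

	/* Returns 0, or -EACCES if appraisal (signature check) fails. */
	err = ima_file_check(file, may_flags);
	if (err)
		return err;

	/* ... go on to serve the file to the client ... */
	return 0;
}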