Re: [LSF/MM TOPIC] fs-verity: file system-level integrity protection

I'm working on an implementation of fs-verity.

On Thu, Jan 25, 2018 at 04:47:46PM -0800, James Bottomley wrote:
> The cost of this is presumably one hash per page in the tree, so it
> costs quite a bit in terms of space.  Presumably the hash tree is
> also dynamically resident meaning a page fault could now also
> potentially fault in the hash tree, leading to a lot of suboptimal
> I/O patterns?

Good observation.  I'm managing the data pages and their associated
authenticated dictionary structure (i.e., Merkle tree) pages in the
same (existing) inode page cache.  I determine the set of pages
needed, and for any auth pages not already up-to-date in the cache, I
issue the read requests for them together with the data pages.  I
expect that any block-level optimizations that normally occur with
data pages will occur with both the data and the auth pages.
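
Roughly, the lookup could go something like this (a minimal sketch
only; auth_page_index(), tree_depth(), and get_auth_page() are
made-up names, not from an actual patch):

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/pagemap.h>

/*
 * Sketch: for each Merkle tree level covering a data page, look the
 * hash ("auth") page up in the same inode page cache, and chain any
 * page that isn't already up-to-date onto a read list so it can be
 * submitted in the same batch as the data pages.
 */
static void fsverity_collect_auth_pages(struct inode *inode,
					pgoff_t data_index,
					struct list_head *need_read)
{
	int level;

	for (level = 0; level < tree_depth(inode); level++) {
		pgoff_t idx = auth_page_index(inode, data_index, level);
		struct page *page = find_get_page(inode->i_mapping, idx);

		if (page && PageUptodate(page)) {
			/* Hash page is already cached and verified. */
			put_page(page);
			continue;
		}
		if (page)
			put_page(page);

		/*
		 * Chain the page via page->lru (as mpage_readpages()
		 * does) so the block layer sees it alongside the data
		 * pages and can merge the requests.
		 */
		list_add_tail(&get_auth_page(inode, idx)->lru, need_read);
	}
}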

I'm introducing a new shared control structure for the group of bio
structs that covers all of the pages.  Since every dependent auth
page that wasn't up-to-date in the cache must complete before we can
complete a data page, each bio's I/O completion decrements the
refcount on the shared structure and measures (hashes) the pages in
that bio.  The last completion to decrement the refcount owns walking
all of the bio structs in the group and performing the page
completion operations (setting the error/up-to-date state and
unlocking) after validating the hash values against the tree.
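
Concretely, the completion side could look something like the sketch
below (again, all names here are hypothetical stand-ins for the real
hashing and completion logic):

#include <linux/atomic.h>
#include <linux/bio.h>
#include <linux/slab.h>

/*
 * Sketch of the shared control structure: one instance covers a
 * group of bios.  Each bio completion hashes its own pages and drops
 * a reference; the last one out validates the hashes and finishes
 * every page in the group.
 */
struct fsverity_bio_group {
	atomic_t	pending;	/* bios still in flight */
	struct bio	**bios;		/* all bios in the group */
	unsigned int	nr_bios;
	struct inode	*inode;
};

static void fsverity_read_end_io(struct bio *bio)
{
	struct fsverity_bio_group *grp = bio->bi_private;

	/* Measure (hash) this bio's pages as they complete. */
	fsverity_measure_bio(grp, bio);

	/* The last decrement owns completing the whole group. */
	if (!atomic_dec_and_test(&grp->pending))
		return;

	/*
	 * Walk every bio in the group: check the computed hashes
	 * against the Merkle tree, then set the error/up-to-date
	 * state and unlock each page accordingly.
	 */
	fsverity_verify_and_finish(grp);
	kfree(grp);
}

The effect is that no data page is marked up-to-date or unlocked
until every auth page it depends on has been read and validated.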


