btrfs, ext4, and f2fs cache the Merkle tree past EOF, which restricts
the maximum size of a verity file to below the filesystem's normal
maximum. Test the logic in those filesystems against files with sizes
near that limit.

Doing this properly requires some understanding of the practical, but
not standardized, layout of the Merkle tree. This is a bit unpleasant
and could make the test incorrect in the future if the implementation
changes. On the other hand, it feels quite useful to test this tricky
edge case. The test could perhaps be made more generic by adding ioctls
that let the filesystem report the maximum file size for a verity file,
or some information about how the Merkle tree is stored.

Signed-off-by: Boris Burkov <boris@xxxxxx>
---
 common/verity         | 11 ++++++
 tests/generic/690     | 86 +++++++++++++++++++++++++++++++++++++++++++
 tests/generic/690.out |  7 ++++
 3 files changed, 104 insertions(+)
 create mode 100755 tests/generic/690
 create mode 100644 tests/generic/690.out

diff --git a/common/verity b/common/verity
index 74163987..ca080f1e 100644
--- a/common/verity
+++ b/common/verity
@@ -340,3 +340,14 @@ _fsv_scratch_corrupt_merkle_tree()
 		;;
 	esac
 }
+
+_require_fsverity_max_file_size_limit()
+{
+	case $FSTYP in
+	btrfs|ext4|f2fs)
+		;;
+	*)
+		_notrun "$FSTYP does not store verity data past EOF; no special file size limit"
+		;;
+	esac
+}
diff --git a/tests/generic/690 b/tests/generic/690
new file mode 100755
index 00000000..251f3cc8
--- /dev/null
+++ b/tests/generic/690
@@ -0,0 +1,86 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2021 Facebook, Inc. All Rights Reserved.
+#
+# FS QA Test 690
+#
+# fs-verity requires the filesystem to decide how it stores the Merkle tree,
+# which can be quite large.
+# It is convenient to treat the Merkle tree as past EOF, and ext4, f2fs, and
+# btrfs do so in at least some fashion. This leads to an edge case where a
+# large file can be under the file system file size limit, but trigger EFBIG
+# on enabling fs-verity. Test enabling verity on some large files to exercise
+# EFBIG logic for filesystems with fs-verity specific limits.
+#
+. ./common/preamble
+_begin_fstest auto quick verity
+
+
+# Import common functions.
+. ./common/filter
+. ./common/verity
+
+# real QA test starts here
+_supported_fs generic
+_require_test
+_require_math
+_require_scratch_verity
+_require_fsverity_max_file_size_limit
+_require_scratch_nocheck
+
+_scratch_mkfs_verity &>> $seqres.full
+_scratch_mount
+
+fsv_file=$SCRATCH_MNT/file.fsv
+
+max_sz=$(_get_max_file_size)
+_fsv_scratch_begin_subtest "way too big: fail on first merkle block"
+# have to go back by 4096 from max to not hit the fsverity MAX_LEVELS check.
+truncate -s $(($max_sz - 4095)) $fsv_file
+_fsv_enable $fsv_file |& _filter_scratch
+
+# The goal of this second test is to make a big enough file that we trip the
+# EFBIG codepath, but not so big that we hit it immediately as soon as we try
+# to write a Merkle leaf. Because of the layout of the Merkle tree that
+# fs-verity uses, this is a bit complicated to compute dynamically.
+
+# The layout of the Merkle tree has the leaf nodes last, but writes them first.
+# To get an interesting overflow, we need the start of L0 to be < MAX but the
+# end of the merkle tree (EOM) to be past MAX. Ideally, the start of L0 is only
+# just smaller than MAX, so that we don't have to write many blocks to blow up,
+# but we take some liberties with adding alignments rather than computing them
+# correctly, so we under-estimate the perfectly sized file.
+
+# We make the following assumptions to arrive at a Merkle tree layout:
+# The Merkle tree is stored past EOF aligned to 64k.
+# 4K blocks and pages
+# Merkle tree levels aligned to the block (not pictured)
+# SHA-256 hashes (32 bytes; 128 hashes per block/page)
+# 64 bit max file size (and thus 8 levels)
+
+# 0                       EOF round-to-64k L7L6L5 L4  L3   L2    L1    L0 MAX EOM
+# |-------------------------|             ||-|--|---|----|-----|------|--|!!!!!|
+
+# Given this structure, we can compute the size of the file that yields the
+# desired properties. (NB the diagram skips the block alignment of each level)
+# sz + 64k + sz/128^8 + 4k + sz/128^7 + 4k + ... + sz/128^2 + 4k < MAX
+# sz + 64k + 7(4k) + sz/128^8 + sz/128^7 + ... + sz/128^2 < MAX
+# sz + 92k + sz/128^8 + ... + sz/128^2 < MAX
+# (128^8)sz + (128^8)92k + sz + (128)sz + (128^2)sz + ... + (128^6)sz < (128^8)MAX
+# sz(128^8 + 128^6 + 128^5 + 128^4 + 128^3 + 128^2 + 128 + 1) < (128^8)(MAX - 92k)
+# sz < (128^8/(128^8 + (128^6 + ... + 128 + 1)))(MAX - 92k)
+#
+# Do the actual calculation with 'bc' and 20 digits of precision.
+# set -f prevents the * from being expanded into the files in the cwd.
+set -f
+calc="scale=20; ($max_sz - 94208) * ((128^8) / (1 + 128 + 128^2 + 128^3 + 128^4 + 128^5 + 128^6 + 128^8))"
+sz=$(echo $calc | $BC -q | cut -d. -f1)
+set +f
+
+_fsv_scratch_begin_subtest "still too big: fail on first invalid merkle block"
+truncate -s $sz $fsv_file
+_fsv_enable $fsv_file |& _filter_scratch
+
+# success, all done
+status=0
+exit
diff --git a/tests/generic/690.out b/tests/generic/690.out
new file mode 100644
index 00000000..a3e2b9b9
--- /dev/null
+++ b/tests/generic/690.out
@@ -0,0 +1,7 @@
+QA output created by 690
+
+# way too big: fail on first merkle block
+ERROR: FS_IOC_ENABLE_VERITY failed on 'SCRATCH_MNT/file.fsv': File too large
+
+# still too big: fail on first invalid merkle block
+ERROR: FS_IOC_ENABLE_VERITY failed on 'SCRATCH_MNT/file.fsv': File too large
-- 
2.33.0
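
For reference, the size calculation above can be sanity-checked outside of
fstests with a small standalone script. This is only an illustrative sketch,
not part of the patch: max_sz below is an assumed example limit (a real run
takes it from _get_max_file_size on the scratch filesystem), and the 94208-byte
fixed overhead is just the 64k tree alignment plus seven block-aligned 4k
non-leaf levels from the derivation above.

#!/bin/bash
# Illustrative sketch only (not part of the patch): reproduce the Merkle
# overhead calculation for an assumed maximum file size.
max_sz=$(( (1 << 53) - 4096 ))   # hypothetical example limit, not a real FS value

# 94208 = 64k alignment of the tree start + 7 non-leaf levels of 4k each
overhead=$(( 64 * 1024 + 7 * 4096 ))

set -f  # keep the '*' in the bc expression from globbing
calc="scale=20; ($max_sz - $overhead) * ((128^8) / (1 + 128 + 128^2 + 128^3 + 128^4 + 128^5 + 128^6 + 128^8))"
sz=$(echo $calc | bc -q | cut -d. -f1)
set +f

echo "test file size for max_sz=$max_sz: $sz"

Keeping the arithmetic in bc mirrors the test's use of $BC via _require_math:
the intermediate 128^8 products overflow 64-bit shell arithmetic, so plain
$(( )) could not evaluate the expression.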