[PATCH] fsstress: avoid splice_f generating too large sparse file




Thanks to Darrick J. Wong for finding this issue! Currently, splice_f
generates the destination file offset as below:

  lr = ((int64_t)random() << 32) + random();
  off2 = (off64_t)(lr % maxfsize);

This produces a pseudorandom 64-bit candidate offset for the
destination file where the spliced data will land, then caps the
offset at maxfsize (which is 2^63 - 1 on x86_64). That effectively
means the data often appears at a very high file offset, which creates
large sparse files very quickly.

That's not what we want, and some test cases, like shared/009, will
take forever to run md5sum on lots of huge files.

Signed-off-by: Zorro Lang <zlang@xxxxxxxxxx>
---
 ltp/fsstress.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/ltp/fsstress.c b/ltp/fsstress.c
index 68e40ee6..cacaedbc 100644
--- a/ltp/fsstress.c
+++ b/ltp/fsstress.c
@@ -2874,10 +2874,11 @@ splice_f(int opno, long r)
 
 	/*
 	 * splice can overlap write, so the offset of the target file can be
-	 * any number (< maxfsize)
+	 * any number. But to avoid too large offset, add a clamp of 1024 blocks
+	 * past the current dest file EOF
 	 */
 	lr = ((int64_t)random() << 32) + random();
-	off2 = (off64_t)(lr % maxfsize);
+	off2 = (off64_t)(lr % MIN(stat2.st_size + (1024ULL * stat2.st_blksize), MAXFSIZE));
 
 	/*
 	 * Due to len, off1 and off2 will be changed later, so record the
-- 
2.17.2
