On Wed, Nov 6, 2019 at 12:24 PM Chengguang Xu <cgxu519@xxxxxxxxxxxx> wrote:
>
> ---- On Wed, 2019-11-06 18:01:54 Amir Goldstein <amir73il@xxxxxxxxx> wrote ----
> > On Wed, Nov 6, 2019 at 9:40 AM Chengguang Xu <cgxu519@xxxxxxxxxxxx> wrote:
> > >
> > > Making many small holes in a 10M test file does not seem very
> > > helpful for test coverage, and it takes too much time to
> > > create the test files. In order to improve test speed,
> > > adjust the test file size to (10 * iosize) for iosize-aligned
> > > hole files, and add more test patterns for small
> > > random holes and a small empty file.
> > >
> > > Signed-off-by: Chengguang Xu <cgxu519@xxxxxxxxxxxx>
> >
> > Reviewed-by: Amir Goldstein <amir73il@xxxxxxxxx>
> >
> > Please send me a plain text version of the patch so I can test it.
> >
>
> Hi Amir,
>
> Sorry about that again, but I really don't know what was wrong with this patch.
> I sent it using 'git send-email' and nothing looked broken or unusual compared
> to other normal patches, so I have to send this patch as an attachment again.
>

The test runs fine, except that the big random hole file ends up with a
single 32MB chunk of data:

Big random hole test write scenarios
--- /root/xfstests/bin/xfs_io -i -fc "pwrite 1024K 30862K" /vdf/ovl-lower/copyup_sparse_test_random_big
wrote 31602688/31602688 bytes at offset 1048576
30 MiB, 7716 ops; 0.1295 sec (232.614 MiB/sec and 59553.1201 ops/sec)

That is because of this typo:

@@ -133,7 +133,7 @@ file_size=102400
 min_hole=1024
 max_hole=5120
 pos=$min_hole
-max_hole=$(($file_size - 2*$max_hole))
+max_pos=$(($file_size - 2*$max_hole))

If you re-submit, please add my Reviewed-by tag.

Thanks,
Amir.
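
For context, the hole-writing loop itself is not shown in the quoted hunk.
A minimal sketch of how a loop of this kind would typically consume max_pos
looks roughly like the following; the loop body, the chunk/hole size math
and the $testfile name are assumptions for illustration, not the test's
actual code:

    # Sketch: write data chunks separated by random-sized holes, never
    # starting a write at or beyond max_pos so the tail of the file
    # stays sparse.  $testfile is a hypothetical stand-in for the
    # lower test file.
    file_size=102400
    min_hole=1024
    max_hole=5120
    pos=$min_hole
    max_pos=$(($file_size - 2*$max_hole))

    while [ $pos -lt $max_pos ]; do
        # random chunk and hole sizes in [min_hole, max_hole)
        chunk=$(($RANDOM % ($max_hole - $min_hole) + $min_hole))
        hole=$(($RANDOM % ($max_hole - $min_hole) + $min_hole))
        $XFS_IO_PROG -fc "pwrite ${pos}K ${chunk}K" $testfile >/dev/null
        pos=$(($pos + $chunk + $hole))
    done

With the submitted line "max_hole=$(($file_size - 2*$max_hole))", max_hole
is clobbered to 92160 (102400 - 2*5120) and max_pos is never assigned, so a
loop bounded by max_pos cannot produce the intended many-small-chunks
layout, which matches the single large write in the output above.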