Hi Jon,

Could you please tell us which version of xfstests you are using? If I remember correctly, test cases 301-304 had a bug in certain versions. Could you please upgrade your xfstests to the latest version? Or, as Ted suggested, you could use xfstests-bld.

Regards,
- Zheng

On Thu, Jun 13, 2013 at 01:17:30AM -0400, jon ernst wrote:
> Hi All,
> About xfstests: I installed fio, but I still get an error when I run ext4 test 301.
> Could anyone enlighten me as to what's wrong with my configuration?
> I have the latest ext4 dev branch code with the latest xfstests code.
> fio version is 2.1.1
>
> This is the whole log:
> Thank you!
>
> fio: failed parsing ioengine=ioe_e4defrag
> fio: job global dropped
> fio valid values: sync Use read/write
> : psync Use pread/pwrite
> : vsync Use readv/writev
> : libaio Linux native asynchronous IO
> : posixaio POSIX asynchronous IO
> : mmap Memory mapped IO
> : splice splice/vmsplice based IO
> : netsplice splice/vmsplice to/from the network
> : sg SCSI generic v3 IO
> : null Testing engine (no data transfer)
> : net Network IO
> : syslet-rw syslet enabled async pread/pwrite IO
> : cpuio CPU cycle burner engine
> : binject binject direct inject block engine
> : rdma RDMA IO engine
> : external Load external engine (append name)
>
>
> fio --ioengine=ioe_e4defrag --iodepth=1 --directory=/device8 --filesize=3424725990 --size=999G --buffered=0 --fadvise_hint=0 --name=defrag-4k --ioengine=e4defrag --iodepth=1 --bs=128k --donorname=test1.def --inplace=0 --rw=write --numjobs=4 --runtime=30*1 --time_based --filename=test1 --name=aio-dio-verifier --ioengine=libaio --iodepth=128*1 --numjobs=1 --verify=crc32c-intel --verify_fatal=1 --verify_dump=1 --verify_backlog=1024 --verify_async=1 --verifysort=1 --direct=1 --bs=64k --rw=randwrite --runtime=30*1 --time_based --filename=test1
> mke2fs 1.42 (29-Nov-2011)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=0 blocks, Stripe width=0 blocks
> 629552 inodes, 2518180 blocks
> 125909 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=2579496960
> 77 block groups
> 32768 blocks per group, 32768 fragments per group
> 8176 inodes per group
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
>
> Allocating group tables:
>
> # Common e4defrag regression tests
> [global]
> ioengine=ioe_e4defrag
> iodepth=1
> directory=/device8
> filesize=3424725990
> size=999G
> buffered=0
> fadvise_hint=0
>
> #################################
> # Test1
> # Defragment file while other task does direct io
>
> # Continious sequential defrag activity
> [defrag-4k]
> ioengine=e4defrag
> iodepth=1
> bs=128k
> donorname=test1.def
> filename=test1
> inplace=0
> rw=write
> numjobs=4
> runtime=30*1
> time_based
>
> # Verifier
> [aio-dio-verifier]
> ioengine=libaio
> iodepth=128*1
> numjobs=1
> verify=crc32c-intel
> verify_fatal=1
> verify_dump=1
> verify_backlog=1024
> verify_async=1
> verifysort=1
> direct=1
> bs=64k
> rw=randwrite
> filename=test1
> runtime=30*1
> time_based
> # /usr/bin/fio /tmp/5884.fio
> fio: engine libaio not loadable
> fio: failed to load engine libaio
> defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
> ...
> defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
> fio: file:ioengines.c:99, func=dlopen, error=libaio: cannot open shared object file: No such file or directory
> failed: '/usr/bin/fio /tmp/5884.fio'
> fio --ioengine=ioe_e4defrag --iodepth=1 --directory=/device8 --filesize=3424725990 --size=999G --buffered=0 --fadvise_hint=0 --name=defrag-4k --ioengine=e4defrag --iodepth=1 --bs=128k --donorname=test1.def --inplace=0 --rw=write --numjobs=4 --runtime=30*1 --time_based --filename=test1 --name=aio-dio-verifier --ioengine=libaio --iodepth=128*1 --numjobs=1 --verify=crc32c-intel --verify_fatal=1 --verify_dump=1 --verify_backlog=1024 --verify_async=1 --verifysort=1 --direct=1 --bs=64k --rw=randwrite --runtime=30*1 --time_based --filename=test1
> mke2fs 1.42 (29-Nov-2011)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=0 blocks, Stripe width=0 blocks
> 629552 inodes, 2518180 blocks
> 125909 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=2579496960
> 77 block groups
> 32768 blocks per group, 32768 fragments per group
> 8176 inodes per group
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
>
> Allocating group tables:
>
> # Common e4defrag regression tests
> [global]
> ioengine=ioe_e4defrag
> iodepth=1
> directory=/device8
> filesize=3424725990
> size=999G
> buffered=0
> fadvise_hint=0
>
> #################################
> # Test1
> # Defragment file while other task does direct io
>
> # Continious sequential defrag activity
> [defrag-4k]
> ioengine=e4defrag
> iodepth=1
> bs=128k
> donorname=test1.def
> filename=test1
> inplace=0
> rw=write
> numjobs=4
> runtime=30*1
> time_based
>
> # Verifier
> [aio-dio-verifier]
> ioengine=libaio
> iodepth=128*1
> numjobs=1
> verify=crc32c-intel
> verify_fatal=1
> verify_dump=1
> verify_backlog=1024
> verify_async=1
> verifysort=1
> direct=1
> bs=64k
> rw=randwrite
> filename=test1
> runtime=30*1
> time_based
> # /usr/bin/fio /tmp/3999.fio
> fio: engine libaio not loadable
> fio: failed to load engine libaio
> defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
> ...
> defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
> fio: file:ioengines.c:99, func=dlopen, error=libaio: cannot open shared object file: No such file or directory
> failed: '/usr/bin/fio /tmp/3999.fio'
> fio --ioengine=ioe_e4defrag --iodepth=1 --directory=/device8 --filesize=3424725990 --size=999G --buffered=0 --fadvise_hint=0 --name=defrag-4k --ioengine=e4defrag --iodepth=1 --bs=128k --donorname=test1.def --inplace=0 --rw=write --numjobs=4 --runtime=30*1 --time_based --filename=test1 --name=aio-dio-verifier --ioengine=libaio --iodepth=128*1 --numjobs=1 --verify=crc32c-intel --verify_fatal=1 --verify_dump=1 --verify_backlog=1024 --verify_async=1 --verifysort=1 --direct=1 --bs=64k --rw=randwrite --runtime=30*1 --time_based --filename=test1
> mke2fs 1.42 (29-Nov-2011)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=0 blocks, Stripe width=0 blocks
> 629552 inodes, 2518180 blocks
> 125909 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=2579496960
> 77 block groups
> 32768 blocks per group, 32768 fragments per group
> 8176 inodes per group
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
>
> Allocating group tables:
>
> # Common e4defrag regression tests
> [global]
> ioengine=ioe_e4defrag
> iodepth=1
> directory=/device8
> filesize=3424725990
> size=999G
> buffered=0
> fadvise_hint=0
>
> #################################
> # Test1
> # Defragment file while other task does direct io
>
> # Continious sequential defrag activity
> [defrag-4k]
> ioengine=e4defrag
> iodepth=1
> bs=128k
> donorname=test1.def
> filename=test1
> inplace=0
> rw=write
> numjobs=4
> runtime=30*1
> time_based
>
> # Verifier
> [aio-dio-verifier]
> ioengine=libaio
> iodepth=128*1
> numjobs=1
> verify=crc32c-intel
> verify_fatal=1
> verify_dump=1
> verify_backlog=1024
> verify_async=1
> verifysort=1
> direct=1
> bs=64k
> rw=randwrite
> filename=test1
> runtime=30*1
> time_based
> # /usr/bin/fio /tmp/12231.fio
> fio: engine libaio not loadable
> fio: failed to load engine libaio
> defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
> ...
> defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
> fio: file:ioengines.c:99, func=dlopen, error=libaio: cannot open shared object file: No such file or directory
> failed: '/usr/bin/fio /tmp/12231.fio'
> --
> To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
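[Editor's note: the log above shows two distinct failures: fio refuses the engine name `ioe_e4defrag` from the buggy job file, and the libaio engine fails to `dlopen` because the libaio shared library is not installed. The following is a minimal sketch, not from the thread, of how one might check both on the local machine; the package names are assumptions (Debian/Ubuntu-style) and may differ on other distributions.]

```shell
#!/bin/sh
# Check 1: "fio: failed parsing ioengine=ioe_e4defrag"
# List the ioengines this fio build actually knows about; e4defrag
# requires a reasonably recent fio.
if command -v fio >/dev/null 2>&1; then
    fio --enghelp
fi

# Check 2: "error=libaio: cannot open shared object file"
# fio was built with libaio support, but the runtime library is missing.
# See whether the dynamic loader can find it.
if command -v ldconfig >/dev/null 2>&1; then
    ldconfig -p | grep libaio || echo "libaio runtime not found"
fi
# If it is missing, something like:  apt-get install libaio1
# (and libaio-dev if fio must be rebuilt) would typically resolve it.
```

Both checks are read-only and safe to run; they merely report what the installed fio and dynamic loader can see.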