On 07/07/2010 03:04 PM, phil cryer wrote:
>> We set up a small test case in our environment to test Gluster / ext4 in a
>> simple 4-node client-replication setup. After running it through the
>> regular Bonnie / IOZone / FFSB tests, we determined that it _worked_, but
>> that compared to ext3, we saw some strange timing results overall (weird
>> lag spikes, etc.). Unfortunately the project was scrapped early on (for
>> external reasons), and no further investigation was done. YMMV.
>
> I'm running ext4 on my gluster cluster; can you share some of the
> data, or the methods/commands you ran? I'd be happy to spec out what
> ext4 looks like for me (we're just hosting files for web access, so
> we're expecting it to be able to handle that), and post the results
> online to share.

Nothing special, frankly, but sure:

File creation:
http://nfsv4.bullopensource.org/tools/tests_tools/test_files.py
with script:

#!/bin/bash
LOOP=0
while [ $LOOP -lt 1000 ]
do
    # tee and cat are optional, for user viewing only, of course
    # test_files.py edited to point to the gluster mount
    time ./test_files.py | tee -a go_test_files.log
    cat ./test_files_orw | tee -a go_test_files.log
    let LOOP=$LOOP+1
done

FFSB (# yum install ffsb) with the configs supplied by the package (see
the shared docs), modified to point to the gluster mount:

profile_appends
profile_largefile_random_read
profile_largefile_sequential_read
profile_smallfile_reads
profile_stress_test

IOZone (# yum install iozone) with script:

#!/bin/bash
loop=0
while [ $loop -lt 100 ]
do
    # tee optional, of course
    /usr/bin/iozone -ace -f /opt/gluster/iozone | tee -a iozone-stress.log
    let loop=$loop+1
done

Bonnie++ (# yum install bonnie++) with script:

#!/bin/bash
loop=0
while [ $loop -lt 100 ]
do
    # tee optional
    (time /usr/sbin/bonnie++ -d /opt/gluster/bonnie -u root) 3>&1 2>&3 | tee -a bonnie.log
    let loop=$loop+1
done

We ran that exact suite on all-Fedora systems, the only difference being
that the client and server filesystems were formatted ext3 during the
first run-through, and ext4 during the next.

Hope that helps...

--
Daniel Maher <dma+gluster AT witbe DOT net>
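P.S. No wrapper script is shown above for the FFSB profiles; if you want a
driver in the same style as the iozone/bonnie loops, a sketch like the
following should do. Untested here, and it assumes ffsb is on your PATH
and that the profiles have already been edited to point at the gluster
mount:

#!/bin/bash
# Hypothetical FFSB driver, same spirit as the loops above: run each
# packaged profile in turn and append the timings and output to a log.
for profile in profile_appends \
               profile_largefile_random_read \
               profile_largefile_sequential_read \
               profile_smallfile_reads \
               profile_stress_test
do
    # ffsb takes a single config/profile file as its argument
    time ffsb "$profile" | tee -a ffsb-stress.log
done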