On Tue, Oct 29, 2002 at 04:27:32PM +0100, Antonello Piemonte wrote:
> for a start I have set my "baseline" measurement: using a 10000 rpm IBM
> Ultrastar with an Adaptec 29160 Ultra160 SCSI adapter,
> 512 MB RAM and a PIII-800, I can create 1000 4 kB files
> in one directory (this is what will happen in the real
> application) in about 8 seconds using this:
> --
> #!/bin/sh
>
> i=1
> while [ $i -le 1000 ]
> do
>     echo $i
>     dd if=/dev/zero of=file_$i bs=1k count=4
>     i=`expr $i + 1`
> done
> exit 0
> --
>
> $ time sh write.sh
> ...
> real    0m8.388s
> user    0m4.037s
> sys     0m3.834s
>
> writing 10,000 files gives instead
> $ time sh write.sh
> real    1m46.499s
> user    0m40.096s
> sys     1m0.400s
>
> so a 10-fold increase in the number of files gives about
> a 12-fold increase in "real" time ...
> I will play next with the journaling mode (using data=ordered now)
> and post the outcome here: thanks for your comments so far!
> antonello

Get rid of the while loop in favor of a for loop; you're spending a lot of
time in the shell! (It's bad enough this test involves spawning 10,000
copies of dd!) Also, don't do the echo, and redirect dd's output to
/dev/null. You'll improve your numbers a lot this way. (Especially if you
are in a framebuffer console or an xterm: I/O to the screen is very
expensive there.)

-Gryn (Adam Luter)
-
To unsubscribe from this list: send the line "unsubscribe linux-raid"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
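
P.S. A minimal sketch of what the suggested rewrite might look like (this
is my own illustration, not a script from the thread; it assumes seq is
available, as it is in GNU coreutils):

```shell
#!/bin/sh
# Hypothetical rewrite of write.sh incorporating the advice above:
# - a for loop over seq replaces the while/expr pair, so the shell no
#   longer forks one expr per iteration
# - the per-file echo is dropped, so nothing is printed to the terminal
# - dd reports its transfer statistics on stderr, so that is sent to
#   /dev/null instead of the console
for i in $(seq 1 1000)
do
    dd if=/dev/zero of=file_$i bs=1k count=4 2>/dev/null
done
```

Each iteration still spawns one dd process, which dominates the cost; the
changes above only remove the extra expr fork and the console I/O. Timing
it the same way (time sh write.sh) makes the before/after comparison direct.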