On Wed, Aug 13, 2008 at 4:52 AM, Tejun Heo <htejun@xxxxxxxxx> wrote:
> Artem Bokhan wrote:
>> Tejun Heo wrote:
>>> Unfortunately, libata core layer isn't ready for
>>> this yet and spews ugly warning message and malfunctions on this.
>
> When NCQ is disabled, it will probably add a tiny bit to inter-command
> latency, thus reducing performance a bit, but I don't think it will stand
> out in any way. When you count in the seek time and all, the inter-command
> latency should be negligible in most cases.

When there is no seek time it does matter (e.g. an SSD). To test
command perf with a normal HD, exercise the disk cache with
this fio command line:

/root/fio --runtime=30 --time_based --bwavgtime=5000 --thread \
    --numjobs=1 --iodepth=1 --rw=randrw --norandommap --overwrite=1 \
    --direct=1 --ioengine=sync --ioscheduler=noop --bs=4k --size=4k \
    --name=hd_sdc --filename=/dev/sdc

and you will get output which contains something like:

hd_sdc: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
Starting 1 thread
Jobs: 1 (f=1): [m] [96.7% done] [ 18381/ 17705 kb/s] [eta 00m:01s]
hd_sdc: (groupid=0, jobs=1): err= 0: pid=17079
  read : io=528344KiB, bw=18476KiB/s, iops=4510, runt= 29281msec
    clat (usec): min=82, max=131, avg=93.11, stdev= 1.67
  write: io=527792KiB, bw=18235KiB/s, iops=4452, runt= 29637msec
    clat (usec): min=84, max=209, avg=93.88, stdev= 1.60
...
    lat (usec): 100=99.60%, 250=0.40%
...

The drive/controller config reports almost 9000 iops from a single drive.
This compares well with existing SSDs on the market.
And it looks like fio needs better granularity in its latency buckets.

hth,
grant
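
A quick sanity check on those numbers, as a shell sketch; every figure
in it is copied from the fio output above, and the per-command overhead
at the end is an inference from those figures, not something fio reports:

# Total command rate: read iops + write iops from the run above.
echo $((4510 + 4452))                         # -> 8962, "almost 9000 iops"

# Implied wall time per command on the single sync, iodepth=1 thread:
echo "scale=1; 1000000 / (4510 + 4452)" | bc  # -> ~111.5 usec per command

# fio reports an average clat of ~93 usec for both reads and writes, so
# roughly 111.5 - 93.5 = ~18 usec per command is spent outside the device
# (syscall, block layer, driver), which is the slice where any extra
# inter-command latency from disabling NCQ would show up.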