IDE vs. SCSI read throughput

Hi,


While using kvm-88, I tried comparing read throughput between the PIIX
IDE and the LSI 53c895a SCSI emulations.

My test code is:
while : ; do dd if=/dev/sda1 of=/dev/null bs=4096 count=1024000 >>
results_file 2>&1; done
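One caveat I should mention about this loop: after the first pass, rereads
of /dev/sda1 largely come out of the guest page cache, so cached and
uncached passes get averaged together. Reading with O_DIRECT would make
every pass go through the emulated controller. A sketch, using a scratch
file in place of /dev/sda1 so it is self-contained (iflag=direct is a GNU
dd extension and needs a filesystem that supports O_DIRECT):

```shell
# Variant of the measurement loop that bypasses the page cache.
# scratch.img is a stand-in here; substitute /dev/sda1 for a real run.
dd if=/dev/zero of=scratch.img bs=1M count=8 2>/dev/null
sync                                          # flush dirty pages first
dd if=scratch.img of=/dev/null bs=4096 iflag=direct 2>/dev/null \
  && echo "direct read ok"
rm -f scratch.img
```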

I averaged the results with:
grep -v records results_file | cut --delimiter=" " -f 8 | awk
'{n++; a[n]=$1; t+=$1; printf("cur=%d   avg=%f   n=%d\n", $1, t/n, n)}
 END {avg=t/n; for (i=1;i<=n;i++) sd+=(a[i] - avg) * (a[i] - avg);
printf("\n\nstd_dev=%f\n", sqrt(sd/n))}'
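For reference, the same statistics come out of a single awk pass; here it
is on four fixed sample values so the arithmetic is easy to check by hand
(the mean of 30, 80, 30, 80 is 55, and the population standard deviation
is 25):

```shell
# Mean and population standard deviation of the throughput column,
# demonstrated on fixed sample values instead of dd output.
printf '30\n80\n30\n80\n' | awk '
  { n++; t += $1; a[n] = $1 }
  END {
    avg = t / n
    for (i = 1; i <= n; i++) sd += (a[i] - avg) ^ 2
    printf "avg=%.1f std_dev=%.1f\n", avg, sqrt(sd / n)
  }'
```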


I ran a hundred loops on:
1. The native host's .img file
2. A PIIX IDE virtual drive
3. An LSI SCSI virtual drive

The average read throughput results I got are:
100 GB/sec on the native host (standard deviation is 3.7)
55 GB/sec on the virtual ide drive (standard deviation is 21.8)
60 GB/sec on the virtual scsi drive (standard deviation is 5.8)


Is there a way to make the virtual drives perform better and maybe get
results that are closer to the host's?
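For what it's worth, one thing I have been meaning to try (an assumption
on my part, not something I have measured) is the host-side cache mode,
and the paravirtual virtio-blk device instead of the emulated IDE/SCSI
controllers. A hypothetical invocation, with guest.img standing in for
the real image file:

```shell
# Hypothetical example: virtio-blk with the host page cache bypassed.
# guest.img is a placeholder name; cache=none opens the image O_DIRECT
# on the host, so guest and host caching no longer stack.
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=guest.img,if=virtio,cache=none
```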


Another issue I noticed while looking at the results is the large
fluctuation in the IDE numbers. Their standard deviation is very large:
a large portion of the results cluster around 30 GB/sec, and another
portion around 80 GB/sec.
Is there a meaningful explanation for this bimodal variation? Is there
some issue in QEMU's IDE code path that causes it?


Thanks,
S
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
