Tejun Heo wrote:
> Petr Vandrovec wrote:
>> For comparison, 1TB Hitachi behind a 3726 PMP (again MS4UM) with the
>> sata_sil patch I sent last week (no NCQ, 1.5Gbps link between the 3512
>> and the PMP, and 3.0Gbps link between the PMP and the drive... why is
>> it faster?):
> If you turn off NCQ by echoing 1 to /sys/block/sdd/device/queue_depth
> on sata_sil24, does the performance change?
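(For reference, a small sketch of the sysfs toggle Tejun suggests; the device name sdd is taken from this thread, and the helper function name is just illustrative. A queue_depth of 1 effectively disables NCQ; the maximum is typically 31.)

```shell
# Sketch: toggle NCQ per-device via the sysfs queue_depth attribute.
set_ncq() {
    # $1: device name (e.g. sdd), $2: queue depth (1 disables NCQ)
    local f="/sys/block/$1/device/queue_depth"
    if [ -w "$f" ]; then
        echo "$2" > "$f"
        echo "queue_depth for $1 now $(cat "$f")"
    else
        echo "no writable queue_depth for $1" >&2
        return 1
    fi
}

# set_ncq sdd 1     # disable NCQ
# set_ncq sdd 31    # re-enable NCQ (typical maximum depth)
```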
I have recompiled the kernel with all debugging disabled, and that gained
me about 1.5MBps, so it is still consistently 1MBps slower than on sil.
Disabling NCQ seems to improve concurrent access a bit (for which I have
no explanation), while slowing down the single-drive scenario:
With NCQ:
1TB alone: 81.22, 79.86
1TB+1TB: 56.28+56.70, 53.51+56.11
Without NCQ:
1TB alone: 79.78, 80.82
1TB+1TB: 57.99+58.12, 56.50+56.46
3512 sil, no NCQ:
1TB alone: 82.28, 82.18
1TB+1TB: 47.20+47.54 # Here apparently command-based switching or the
1.5Gbps link between the device and the PMP becomes the bottleneck
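(The thread does not say which tool produced the MB/s figures above; one common way to reproduce this kind of single-drive vs. concurrent sequential-read comparison is with dd, sketched below. Device names sdd/sde and the transfer size are assumptions.)

```shell
# Sketch: sequential-read throughput, single drive vs. two drives at once.
bench() {
    # $1: block device or file to read from
    # Read 1GiB in 1MiB chunks; dd's last status line reports the rate.
    dd if="$1" of=/dev/null bs=1M count=1024 2>&1 | tail -1
}

# bench /dev/sdd                         # single drive
# bench /dev/sdd & bench /dev/sde & wait # two drives behind the same PMP
```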
And it seems that I am observing what another poster pointed out - that
apparently all SiI chips are limited somewhere around 120-130MBps, and
cannot do more even if you ask nicely...
Petr