On Thu, 2014-01-09 at 09:26 +0100, Henrik Goldman wrote:
> > Apologies for the delayed response, still catching up on post holiday
> > items..
>
> No issues. Nothing bad happened. :-)
>
> > In my experience with qla2xxx target mode on 25xx + 26xx series
> > hardware, I've never seen half-duplex mode negotiated using either
> > point-to-point or switched fabric configurations.
>
> We're using qle2460 4 gbit adapters due to its ultra low cost, and
> while they seem to work very well, I am unable to get more than
> 270 MB/s out of them when doing storage vmotion.
> I should note that both storage arrays using target are certainly
> capable of delivering more than 400 MB/s when doing local tests, e.g.
> dd copy.

FYI, ESX uses /DataMover/MaxHWTransferSize to control the copy offload
blocks per EXTENDED_COPY CDB (the default is 4MB), and sometimes using a
larger value (8MB or 16MB) can have an effect on vmotion performance.

> Obviously the expectation is to get the same over FC as you get locally.

Note that FC host fabric throughput vs. array throughput may differ for
LUNs within a single target array, as I/Os are generated locally to
satisfy EXTENDED_COPY operations.

> I did some tests from within VMware itself where a vm was running a
> benchmark tool that was reading and writing in different block sizes.
> This is the only time I've seen it surpass 400 MB/s.

<nod>

> However I am still unsure if I am really getting what is expected.

For multi-array vmotion doing ESX host I/O transfers, you'd expect this
to be closer to local performance, yes.

> Is there a way to verify in target or Linux overall what speed is set?

FC link speed is available via /sys/class/fc_host/host*/speed

--nab

--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
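P.S. A sketch of inspecting the negotiated link speed mentioned above, looping
over the standard fc_transport sysfs attributes (`speed` and `port_state` are
stock attributes exposed by the FC transport class; the loop and output format
here are just an illustration, and the message printed when no FC HBAs are
present is my own addition):

```shell
# List negotiated link speed and port state for each FC host (e.g. qla2xxx).
# With no FC HBAs present, the glob stays unexpanded and we report that instead.
for h in /sys/class/fc_host/host*; do
    [ -d "$h" ] || { echo "no fc_host instances found"; break; }
    echo "$(basename "$h"): speed=$(cat "$h/speed"), state=$(cat "$h/port_state")"
done
```

On a healthy 4 Gbit qle2460 link you would expect `speed` to read "4 Gbit" and
`port_state` to read "Online".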
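P.P.S. For the /DataMover/MaxHWTransferSize tuning above, a hedged sketch of
how it might be adjusted from the ESXi shell with esxcli (the value is in KB,
so 16384 corresponds to 16MB; exact option handling may vary by ESXi release,
so treat this as an illustration rather than a verified recipe):

```shell
# Show the current copy-offload transfer size (default 4096 KB = 4MB)
esxcli system settings advanced list --option=/DataMover/MaxHWTransferSize

# Raise it to 16MB to issue larger EXTENDED_COPY segments per CDB
esxcli system settings advanced set --option=/DataMover/MaxHWTransferSize \
    --int-value=16384
```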