Re: FCOE lab equipment

Hello Nab,
Hello RMD,

* Nicholas A. Bellinger <nab@xxxxxxxxxxxxxxx> [2013-11-04 20:03]:
> The Intel 82599 (x520) NICs work well, and also have some built-in
> FCoE offloads. FYI, they are particularly sensitive with regard to
> optical GBICs, so I'd very much recommend using copper direct attach
> cables if at all possible.

thank you for the confirmation and the word of warning about optical
links.

> That's not exactly true. It's still completely possible to run FCoE
> over normal 1 Gb/sec ports in VN2VN mode (eg: point to point) without
> Lossless ethernet.

I was not aware of this and will try it. I assume ESX does not support
software FCoE on 1 GBit NICs, but Linux has both an FCoE initiator and
an FCoE target. :-)

* Rustad, Mark D <mark.d.rustad@xxxxxxxxx> [2013-11-04 21:25]:
> Packet loss is extremely disruptive to FCoE. It is true that DCB is
> not required - link flow control can be used instead - but the
> environment for the FCoE traffic really should be lossless.

I see. I was really not aware that DCB is _not_ required.

> Link flow control has limitations that can result in deadlocks, which
> is at least one reason DCB was created. You can reduce the likelihood
> of such deadlocks in a link flow control environment by dedicating an
> interface to your FCoE traffic with link flow control enabled, and
> running other traffic through a separate physical link either with or
> without link flow control on that traffic.

That makes sense. Thank you for pointing that out.
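
If I understand you correctly, on the dedicated FCoE interface that
boils down to something like the following (assuming the driver
supports pause frames; eth2 is again a placeholder):

  # enable 802.3x link flow control in both directions
  ethtool -A eth2 rx on tx on
  # verify what is actually in effect
  ethtool -a eth2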

> It should work over a layer 2 hop, but you do need to know how your
> switches behave with regards to flow control. DCB-enabled switches
> that permit the use of PFC (Priority-based Flow Control) should do ok
> here.

I see.
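
Before trusting a switch port I'll look at what was actually
negotiated. As far as I understand, lldpad's dcbtool can show and,
where the NIC allows it, set PFC for the FCoE priority, along the
lines of Intel's FCoE setup notes (untested here):

  # show the PFC configuration currently in effect
  dcbtool gc eth2 pfc
  # enable DCB, the FCoE application priority, and PFC
  dcbtool sc eth2 dcb on
  dcbtool sc eth2 app:fcoe e:1
  dcbtool sc eth2 pfc e:1 a:1 w:1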

> I'm afraid I am not quite an expert here. I don't know about degrees
> of interoperability between switches and such. I know enough to worry
> about it though. I would be very wary of attaching some low-end switch
> to a high-end switch with any expectation of getting lossless behavior
> with such a setup. A low-end switch with just link flow control might
> work in some small environments, however.

I read a lot of HP training material on the topic and did some research
and also experienced it in the field while installing Cisco UCS 1st
generation and HP blades for customers. However, those setups only used
FCoE internally or up to the top-of-rack switches and then switched to
FC. I heard from an HP employee that they will soon release a firmware
update for their Flex-10 switch to support FCoE across multiple layer 2
hops, but I have not seen it yet.

> We can try. It can get very complicated. If you really need storage
> access across a network that you cannot make lossless, I would not use
> FCoE but rather iSCSI.

I could not agree more, and for the most part I'll stick with iSCSI.
Still, I would like to get some hands-on experience with FCoE, so I'll
probably experiment a little with both 1 GBit and 10 GBit FCoE. DCB
switches are currently simply too expensive just for gaining some
hands-on experience.
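
On the target side I'll try LIO's tcm_fc fabric module through
targetcli; from memory, and with a made-up WWN that would have to
match the local FCoE port, the setup should look roughly like this:

  targetcli
  /> backstores/fileio create lab0 /srv/fcoe/lab0.img 2G
  /> tcm_fc/ create 20:00:00:11:22:33:44:55
  /> tcm_fc/20:00:00:11:22:33:44:55/luns create /backstores/fileio/lab0
  /> tcm_fc/20:00:00:11:22:33:44:55/acls create 20:00:00:aa:bb:cc:dd:ee
  /> saveconfig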

Thank you two for all the insight and tips on the topic. I'll report
back if I hit any road bumps.

Cheers,
        Thomas