RE: [Open-FCoE] Need a little help with a vn2vn target & initiator setup

I'm not sure how you created the FCoE interface.  For a vn2vn connection, there are two ways to do it:
1. Back-to-back between two ports.  I believe this has already worked for you.
2. Through a vn2vn-capable switch, meaning a switch that supports this feature.  Such a switch will likely require a VLAN for the vn2vn connection.  In that case, you will need to create a VLAN interface on both the initiator and the target before creating the vn2vn interface.  I recommend checking with the switch vendor for vn2vn support first.  If it is supported, find out what switch configuration is required and configure the switch accordingly.  On the initiator and target side, there is nothing special to do beyond creating the vn2vn interface.
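As a rough sketch, creating a vn2vn instance on a VLAN interface with fcoe-utils usually looks like the following; the interface name eth6 and VLAN ID 101 are examples, and the exact config file layout can differ between fcoe-utils versions:

```shell
# Create and bring up a VLAN interface on the FCoE-capable NIC
# (eth6 and VLAN ID 101 are placeholders -- use what your switch requires)
ip link add link eth6 name eth6.101 type vlan id 101
ip link set eth6.101 up

# Give the interface a VN2VN-mode fcoe-utils config, then restart fcoe
cat > /etc/fcoe/cfg-eth6.101 <<'EOF'
FCOE_ENABLE="yes"
DCB_REQUIRED="no"
MODE="vn2vn"
EOF
service fcoe restart

# Alternatively, create the vn2vn instance directly via libfcoe
echo eth6.101 > /sys/module/libfcoe/parameters/create_vn2vn
```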

Thanks, Ken

-----Original Message-----
From: devel-bounces@xxxxxxxxxxxxx [mailto:devel-bounces@xxxxxxxxxxxxx] On Behalf Of Andrew Theurer
Sent: Friday, August 17, 2012 6:54 PM
To: Nicholas A. Bellinger
Cc: target-devel@xxxxxxxxxxxxxxx; openfcoe-devel
Subject: Re: [Open-FCoE] Need a little help with a vn2vn target & initiator setup

On Fri, 2012-08-17 at 18:17 -0700, Nicholas A. Bellinger wrote:
> On Fri, 2012-08-17 at 14:02 -0500, Andrew Theurer wrote:
> > Hello,
> > 
> > First, thanks for this great project!  We have been using the FC 
> > target for a few months now for KVM disk IO scalability analysis, 
> > and it has been a great resource for us!  We're able to do >1.4 M 
> > IOPS with multiple VMs via PCI-pass-through, and we're now testing 
> > virtio scalability enhancements, so it's been an incredibly useful 
> > feature for us!
> > 
> 
> Thanks for sharing Andrew!  Glad to see others are also pushing > 1M 
> IOPs workloads with the mainline kernel target.  Out of curiosity, 
> would you mind sharing how many LUNs you needed to push 1.4M IOPs..?

We used 7 SCSI target hosts, each with a dual-port 8 Gbps QLogic adapter.  Each server had 8 LUNs backed by ramdisks.  We could get 100,000 IOPS, 50%/50% read/write with an 8K request size, per 8 Gbps interface.  The servers were of the dual-socket Intel Nehalem variety.  CPU usage was quite low, but I do not recall exactly what it was.

The initiator host is a single IBM x3850X5, which has 4 Intel Westmere-EX processors and 7 PCI slots, which were all filled with the same QLogic adapters.
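Those per-port numbers line up with the aggregate figure mentioned earlier: 7 target hosts, each with a dual-port adapter, at roughly 100k IOPS per port:

```shell
# 7 target hosts x 2 ports/host x ~100k IOPS per 8 Gbps port
echo $((7 * 2 * 100000))   # -> 1400000, i.e. the ~1.4M IOPS observed
```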

> 
> Also when you say the 'FC Target', I assume you mean tcm_fc(FCoE) 
> fabric driver, yes..?

Actually, it was tcm_qla2xxx for the above configuration.  tcm_fc is what we want to start testing now.
> 
> > We are now trying to create a test-bed with FCoE, with only FCoE 
> > targets and initiators (no FCF's).  For the moment, I am trying a 
> > directly-connected 82599EB adapters from two systems (one the 
> > target, the other the initiator).  These interfaces are configured 
> > for IP and ping-able.  I created vn2vn FC ports and I now have 1 
> > fc_host per system.  I have created a target on one via targetcli with a single LUN:
> > 
> > 
> > > /> ls
> > > o- / ......................................................................................................... [...]
> > >   o- backstores .............................................................................................. [...]
> > >   | o- block .................................................................................... [0 Storage Object]
> > >   | o- fileio ................................................................................... [1 Storage Object]
> > >   | | o- lun1 ..................................................................... [/tmp/lun1.img (1.0G) activated]
> > >   | o- pscsi .................................................................................... [0 Storage Object]
> > >   o- loopback ........................................................................................... [0 Target]
> > >   o- tcm_fc ............................................................................................. [1 Target]
> > >     o- 20:00:00:1b:21:4b:0a:0e ........................................................................... [enabled]
> > >       o- acls .............................................................................................. [1 ACL]
> > >       | o- 20:00:00:1b:21:67:5f:2a .................................................................. [1 Mapped LUN]
> > >       |   o- mapped_lun1 ................................................................... [lun1 fileio/lun1 (rw)]
> > >       o- luns .............................................................................................. [1 LUN]
> > >         o- lun1 ...................................................................... [fileio/lun1 (/tmp/lun1.img)]
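For reference, a configuration like the listing above can be reproduced non-interactively with targetcli invocations along these lines (the WWPNs and backing-file path are taken from the listing; exact command syntax can vary between targetcli versions):

```shell
# 1 GiB fileio backstore backing /tmp/lun1.img
targetcli /backstores/fileio create lun1 /tmp/lun1.img 1G

# tcm_fc target on the local FCoE port's WWPN
targetcli /tcm_fc create 20:00:00:1b:21:4b:0a:0e

# Export the backstore and map it for the initiator's WWPN
targetcli /tcm_fc/20:00:00:1b:21:4b:0a:0e/luns create /backstores/fileio/lun1
targetcli /tcm_fc/20:00:00:1b:21:4b:0a:0e/acls create 20:00:00:1b:21:67:5f:2a
targetcli saveconfig
```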
> > 
> > The initiator has a port name of 0x2000001b21675f2a and both target 
> > and initiator ports are "Online".  However, after re-scanning for 
> > devices, the initiator does not find any new LUNs.
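One thing that may be worth double-checking on the initiator side is the usual sysfs rescan hooks (the host number below is a placeholder):

```shell
# Check the state of the FC(oE) hosts
cat /sys/class/fc_host/host*/port_state

# Rescan every channel/target/LUN on a given SCSI host (host7 as example)
echo "- - -" > /sys/class/scsi_host/host7/scan

# Or trigger a fresh discovery pass (LIP) on the FC host
echo 1 > /sys/class/fc_host/host7/issue_lip
```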
> > 
> > I tried to do a fcping, but I can only successfully ping a FC ID, 
> > and not a port name:
> > 
> > > [root@spv-21 ~]# fcping -c 3 -h eth6 -F 0x000a0e
> > > sending echo to 0xA0E
> > > echo    1 accepted                        0.468 ms
> > > echo    2 accepted                        0.428 ms
> > > echo    3 accepted                        0.462 ms
> > > 3 frames sent, 3 received 0 errors, 0.000% loss, avg. rt time 0.453 ms
> > 
> > > [root@spv-21 ~]# fcping -h eth6 -N 0x2000001b214b0a0e
> > > GID_NN error: Invalid argument
> > > cannot find fcid of destination @ wwnn 0x2000001B214B0A0E
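A note on the failure mode above: the GID_NN in the error suggests that `fcping -N` resolves the WWNN via a name-server query, and in a vn2vn setup there is no FCF and hence no fabric name server to answer it, which would explain why only the FC-ID form (`-F`) succeeds.  The WWNs and FC-IDs of peers the local port has already discovered can be read from sysfs, e.g.:

```shell
# List discovered remote ports with their WWNN/WWPN and assigned FC-ID
for rp in /sys/class/fc_remote_ports/rport-*; do
    printf '%s node=%s port=%s fcid=%s\n' "$(basename "$rp")" \
        "$(cat "$rp/node_name")" "$(cat "$rp/port_name")" \
        "$(cat "$rp/port_id")"
done
```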
> > 
> > I am wondering if there's still a connectivity problem.  [Not 
> > knowing much about the world of FC] Is there some sort of 
> > wwnn-to-fcid mapping that I am missing?  Or maybe something else?
> > 
> 
> Mmmm, not sure what is going on here with this particular setup.
> 
> CC'ing MDR & Kiran @ Intel + OpenFCoE list, who will have a better idea 
> how to start debugging this..

Thanks.  The open-fcoe dev list has been incredibly helpful to me recently.  I thought this might have been a target-specific issue, but the more the merrier!

-Andrew

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxx
https://lists.open-fcoe.org/mailman/listinfo/devel
--
To unsubscribe from this list: send the line "unsubscribe target-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
