Re: Dual target node ALUA multipathing for VMware


 



On 09/15/2014 05:31 PM, Robert Wood wrote:
> Hi Mike,
> 
>> If I understand you correctly, the following configuration must be present 
>> when the LUN is active on node e1:
>>
>> Node e1 (LUN is a member of both groups):
>> TPG name: e1, TPG ID = 1, ALUA state: Active/Optimized
>> TPG name: kio1, TPG ID = 2, ALUA state: Standby
>>
>> Node kio1 (LUN is *not* a member of either group):
>> TPG name: e1, TPG ID = 1, ALUA state: Active/Optimized
>> TPG name: kio1, TPG ID = 2, ALUA state: Standby
>>
>> And this would be the change when we want LUN to switch to node kio1 (or 
>> node e1 crashes):
>>
>> Node e1 (if present)(LUN is *not* a member of either group):
>> TPG name: e1, TPG ID = 1, ALUA state: Standby
>> TPG name: kio1, TPG ID = 2, ALUA state: Active/Optimized
>>
>> Node kio1 (LUN is a member of both groups):
>> TPG name: e1, TPG ID = 1, ALUA state: Standby
>> TPG name: kio1, TPG ID = 2, ALUA state: Active/Optimized
>>
> 
> OK, I think we have progressed in understanding what VMware expects.  VMware 
> basically does not care that we have Linux boxes with LIO delivering 
> storage.  In fact, it sees all ports as one big array delivering a given LU, 
> and VMware expects to fail over from one TPG to another TPG upon failure of 
> all paths to a particular TPG.
> 
> Example: if the nodes are e1 and kio1, the TPGs are 0 and 1, S = Standby, A = 
> Active/Optimized, (l) means the LU is a member and (n) means it is not, then in 
> our initial state we have:
> 
> TPG -->>   0        1
> 
> kio1       S(l)     A(n)
> 
> e1         S(n)     A(l)
> 
> Then when we fail over, we just "catch" where VMware is going to go with its 
> STPG request:
> 
> TPG -->>   0        1
> 
> kio1       A(l)     S(n)
> 
> e1         A(n)     S(l)
> 
> We believe this can be implemented with Pacemaker and will run that test in 
> a couple of days.  A manual test seems to work OK.
> 

Yes, this is what I was saying before. This brings us back to the
question about how you are going to handle that STPG.
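For reference, the per-TPG state flips described above come down to configfs
writes on each target node. A rough, untested sketch (the backstore path
"iblock_0/my_lun" and the group names are assumptions; substitute your actual
device and tg_pt_gp names):

```shell
# Assumed backstore path; adjust hba/device names to your setup.
ALUA=/sys/kernel/config/target/core/iblock_0/my_lun/alua

# LIO encodes ALUA access states numerically:
#   0 = Active/Optimized, 1 = Active/Non-optimized, 2 = Standby
echo 2 > "$ALUA/e1/alua_access_state"     # demote TPG "e1"
echo 0 > "$ALUA/kio1/alua_access_state"   # promote TPG "kio1"
```

Run the equivalent writes on both nodes so the reported states stay
consistent across all ports.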

For the STPG handling I am working on a patch so that, when we get an STPG,
we can call out to userspace and do whatever needs to be done. So if you
are using Pacemaker, you can run the cibadmin commands to move the
resource to the other node.
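In other words, the userspace callout could be as small as a script like the
following (the resource name and argument convention are made up for
illustration; crm_resource wraps the underlying cibadmin edits):

```shell
#!/bin/sh
# Hypothetical STPG callout handler: argument 1 is the node whose TPG
# the initiator asked to make Active/Optimized.
RESOURCE=lun_rsc            # assumed Pacemaker resource name
TARGET_NODE="$1"

# Move the resource to the requested node; Pacemaker then runs the
# resource agent, which rewrites the ALUA states on both target nodes.
exec crm_resource --move --resource "$RESOURCE" --node "$TARGET_NODE"
```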

Are you using a different method to catch the STPGs?



