Re: RAID performance

On 2/16/2013 11:19 AM, Adam Goryachev wrote:

> OK, I don't think any of this is going to work properly... I have 11 targets at the moment, so with two interfaces on the xen box, and 2 ip's on the san, it is going to have 4 paths per target. So I need 44 paths, but after 32 it times out all the rest of them when doing a session login. I don't see how to reduce this any further without going back to the old 1Gbps maximum performance level (and still use MPIO). I'd have to limit which targets can be seen so only a max of 8 can be seen by any host. This will only get worse if I manage to get it all working and then add more VM's to the system, I could easily end up with 20 targets.
...
> Well, I didn't even get to this... actually, exposing all 8 IP's on san1 produced 16 paths per Target, and I did see problems trying to get that working, which is why I dropped down to the 4 paths above.
...
> So it seems even this won't work, because I will still have 4 paths per target... Which brings me back, to square one....
> 
> I need both xen ports in a single bond, each group of 4 ports on san1 in a bond, this provides 2 paths per target (or san could be 8 ports in one bond, and xen could use two interfaces individually), and then I can get up to 16 targets which at least lets me get things working now, and potentially scales a little bit further.
> 
> Maybe it is unusual for people to use so many targets, or something... I can't seem to find anything on google about this limit, which seems to be pretty low :(

I regret suggesting this.  This can work in some scenarios, but it is
simply not needed in this one, whether you could get it to work or not.
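
That said, the session math is just (host NICs that can reach a portal)
x (portal IPs) x (targets): 2 x 2 x 11 = 44, which is what runs you
into that 32-session wall.  If you keep two ports a side but bind each
host NIC to a single portal, it drops to 2 x 11 = 22.  Roughly, and
untested here -- NIC names and portal addresses are placeholders for
your setup:

  # one open-iscsi iface per physical NIC on the xen host
  iscsiadm -m iface -I iface0 --op=new
  iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth2
  iscsiadm -m iface -I iface1 --op=new
  iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth3

  # discover each SAN portal through exactly one iface, so every
  # target ends up with two sessions instead of four
  iscsiadm -m discovery -t sendtargets -p 10.0.0.1 -I iface0
  iscsiadm -m discovery -t sendtargets -p 10.0.0.2 -I iface1

  # log in to everything that was discovered
  iscsiadm -m node --loginall=all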

> I don't understand this.... MPIO to all 8 ports would have scaled the best

*Theoretically*, yes.  But your workload isn't theoretical.  You don't
design a network, whether user LAN or SAN, around the theoretical
maximum of every channel.  You design it around actual user workload
data flows.  In reality, 2 ports of bandwidth in your SAN server is far
more than sufficient for your current workload, and would still be
sufficient at double that workload.  And nobody outside HPC designs for
peak throughput; you design for average throughput, unless the budget
is unlimited, and most are not.

> However, using the 4 path per target method will limit performance depending on who those paths are shared with.

See above regarding actual workload data flows.  You're still in
theoretical land.

> Using balance-alb...

Forget using Linux bonding for SAN traffic.  It's a non-starter.  With
the usual modes (802.3ad, balance-alb) any single TCP connection rides
one slave at a time, so each iSCSI session is still capped at one
port's worth of bandwidth no matter how many ports are in the bond,
and alb's ARP-based receive balancing is complexity you don't want on
a storage network.  Do the load balancing at the SCSI layer with
dm-multipath across plain, unbonded ports.
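
dm-multipath gives you the round-robin you were trying to get out of
bonding, per-I/O across the independent sessions.  Something along
these lines in /etc/multipath.conf -- the vendor/product strings are
placeholders, use whatever `multipath -ll` reports for your LUNs:

  defaults {
      user_friendly_names yes
  }

  devices {
      device {
          # placeholder identity strings; must match your target software
          vendor  "YOUR_VENDOR"
          product "YOUR_PRODUCT"
          path_grouping_policy  multibus        # all paths in one group
          path_selector         "round-robin 0"
          rr_min_io             100             # I/Os per path before switching
          no_path_retry         queue           # queue I/O on path loss, don't error out
      }
  }

Reload multipathd after editing it, and put your filesystems/LVs on
the /dev/mapper/* devices, not the raw /dev/sd* paths.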

> What am I missing or not seeing? I'm sure I'm blinded by having tried so many different things now...

You're allowing theory to blind you to reality.  You're looking for
something perfect instead of something more than sufficient.

> I just don't see why this didn't work for me.... I didn't even find an option to adjust this maximum limit. I only assume it is a limit at this stage...

Forget it.  Configure the 2:2 and be done with it.
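
Once it's logged in, a quick sanity check looks like this (device
names are illustrative):

  # expect two sessions per target, one per host NIC
  iscsiadm -m session

  # each multipath device should show two active paths
  multipath -ll

  # confirm traffic actually spreads across both ports under load
  cat /proc/net/dev     # or iftop / bwm-ng on the SAN-facing NICs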

>>> I really need to get this done by Monday after today

One more reason to go with the standard 2:2 setup.

-- 
Stan

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

