Re: ATA-over-Ethernet vs. iSCSI -- CORAID is NOT SAN, also check multi-target SAS




> [ I'm resending this to the list ]
> 
> Nick Bryant wrote:
> > Just wondering if anyone out there has any opinions or
> > better still experience with Coraid's AoE products in a
> > centos environment?
> 
> I have been bombarded with CORAID marketing in various Linux
> groups over the last few months.  This is because CORAID
> smartly hit the Linux tradeshow circuit, which is definitely
> the best way to get free "word of mouth" advertising.

Indeed - that's how I found out about them.

> 
> I have seen their products.
> I have read their technical specifications.
> And I have come to one conclusion.
> 
> AoE is vendor marketing.
> It is NOT SAN of any sort.
> It relies 100% on server-wide (e.g., GFS) coherency.
> Or you merely use it as dedicated storage (even if you slice
> for different servers).
> 
> - The "efficiency" argument
> 
> Sure, it has less overhead than iSCSI.  That means if you put
> in a cheap, $50 GbE card or use the on-motherboard GbE NIC,
> you're going to get better performance.
> 
> But anyone who is serious about a SAN puts in a $500 GbE
> iSCSI HBA, which is very affordable now.
> 

It's not *just* the cost of the HBA though - the storage device itself is
quite a bit more expensive.

> - The "feature" reality
> 
> AoE has virtually _no_ features.  It's "dumb storage" and
> that's that.  iSCSI is intelligent.  How intelligent your
> target acts is up to you (and the cost).  It's multi-vendor
> and there are complete standards, from host to target --
> especially intelligent targets.  AoE is just a simplistic
> block interface that relies on 100% software coherency.
> 
> > We're looking at developing a very basic active/standby
> > 2 node cluster which will need a shared storage component.
> 
> AoE is _not_ it then.  AoE does _not_ allow shared storage.
> You must slice the array for each system so they are
> _independent_.  It is _not_ a SAN.  It does not have
> multi-targetting features, only segmented target capability.

OK, forgive me - my knowledge of SANs isn't great - but I thought that with
a SAN presenting itself as a block device (a real SAN), only one machine
could have true read/write access to the "slice", unless you used a
filesystem like GFS (which I wasn't intending to). When I said shared
storage I didn't mean it had to be accessed at the same time from all
hosts. The RHEL cluster suite in an active/standby setup actually mounts
the partitions as a host changes from standby to active, after it's sure
the formerly active host no longer has access, via a "lights out" OoB
setup.

Well, that was my understanding of how it worked anyhow?
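
For the curious, here's roughly what I had pictured with an AoE slice -
just a sketch, assuming the aoe kernel module and aoetools on the cluster
nodes, vblade on the storage box, and made-up interface/device names:

    # storage box: export a local block device as AoE shelf 0, slot 0
    vbladed 0 0 eth1 /dev/sdb

    # whichever node is active: load the driver, discover, mount
    modprobe aoe
    aoe-discover
    mount /dev/etherd/e0.0 /data

    # on failover, the standby fences the old active node (lights-out/OoB)
    # before mounting the same slice, so only one host ever has it mounted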

> 
> They talk GFS.  It's GFS in the absolute worst setup -- 100%
> software, 0% intelligent target.  ;->
> 
> > It would be a huge bonus if other servers could also use
> > the storage device.
> 
> Now it can do that.  You just slice your array for whatever
> servers.
> 
> > Originally I was looking at the Dell/EMC iSCSI solution as
> > it's a cheaper solution than fibre channel.
> 
> Have you considered Serial Attached SCSI (SAS)?
> Most people haven't heard of it, so be sure to read my blog
> from 3 months back:
> http://thebs413.blogspot.com/2005/08/serial-storage-is-future.html

I have not, but I'll be sure to check it out now... my worry is that none
of the vendors I'm talking to (Acer, Dell, HP and Sun) offered it up.


> 
> 12-24Gbps (1.2-2.4GBps after 8b/10b encoding) using a 4 or 8
> channel trunk, which is what external target solutions are
> using.  SAS is SCSI-2 over Serial (physically, basically the
> same as SATA, with twisted pair, which SATA-IO also requires).
> It's very scalable and flexible and _damn_fast_.  In a
> nutshell, SAS can use ...
> - Internal SATA drives
> - Internal SAS drives
> - External SAS drives
> - External SAS enclosures (with SAS drives)
> - External SAS subsystems (with SATA or SAS drives)
> - External SAS hubs (intelligent multi-targetting)
> 
> So what's the catch of SAS?  Same as SCSI:
> 1.  Few people go for the multi-target options
> 2.  Shorter distance (8m ~ 25')
>
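
If I'm doing the math right on those numbers: each SAS 1.0 lane runs at
3.0Gbps raw, and 8b/10b encoding means only 8 of every 10 bits on the wire
are data, so:

    4 lanes x 3.0Gbps = 12Gbps raw -> 12 x 8/10 =  9.6Gbps ~ 1.2GBps usable
    8 lanes x 3.0Gbps = 24Gbps raw -> 24 x 8/10 = 19.2Gbps ~ 2.4GBps usable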

The distance does limit the flexibility but can be worked around.

> #2 is not an issue if you're providing storage for the
> closet.  That was always the bonus of multi-target,
> intelligent-target SCSI, before higher-speed FC-AL was
> available.
> 
> #1 is where SAS is still "getting off the ground."  It
> leverages existing SCSI-2, so the multi-targetting is there.
> But the vendor products are still coming out.
> 
> So the jury is still out on SAS as a SAN solution merely
> because of the limited products available right now.  But
> it's definitely far more affordable than FC-AL, leverages
> everything learned with multi-target SCSI-2 and is a heck of
> a lot better for the closet than iSCSI (where distance isn't
> a factor).
> 

Definitely one to look at in the future then.

> > However, the performance issues without using a TCP
> > offload HBA are a bit of a concern.
> 
> If you go iSCSI, you _must_ go with a HBA.  Heck, I would
> argue that you would probably want a HBA for layer-2 Ethernet
> too, although the layer-3/4 traffic is far worse.
> 

Good to know.
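
For comparison's sake, my understanding is that the software-initiator
route (no HBA) looks something like this - a sketch assuming the open-iscsi
tools and a made-up target address:

    # discover targets exported by the array
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

    # log in; the LUN then appears as an ordinary SCSI block device
    iscsiadm -m node --login

The catch being that all the TCP/IP and iSCSI protocol work then runs on
the host CPU - which is exactly what the HBA offloads.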

> But HBAs start at $500 these days.  That's chump change.
> There are some outstanding iSCSI HBAs under $1,000, so that
> shouldn't deter you.
> 

That's the problem: this "chump" is paying for it himself... sadly I no
longer have bank/telco budgets to play with :( But still, 500USD isn't
really that bad.

> An intelligent, multi-targetted subsystem is where the cost
> is going to be.  And that's where multi-target SAS devices,
> once they become more commonplace, should be significantly
> cheaper than iSCSI (before figuring disk cost).
> 
> > Then I found the Coraid (www.coraid.com) products based on
> > the open standard AoE protocol.
> 
> It's _their_ standard.  And it's _empty_.  It has _no_
> SAN-level code.  There is _no_ multi-targeting logic.  You
> have to slice the array -- only 1 connection per end-user
> volume.
> 

I know they created the standard, but it was my understanding that it's
open now? Whether other vendors will adopt it remains to be seen, though.

> > It's got a number of benefits including: price,
> 
> So does SAS, and it uses the proven SCSI-2 protocol for
> multi-targetting.
> 
> > less protocol overhead for the server
> 
> SCSI-2 is better than layer-2 Ethernet, let alone designed
> for storage.  ;->
> 
> > and the ability to use any disks where as "enterprise"
> > approved products form the likes of Dell/Sun etc
> > only support 250gb sata disks at the moment.
> 
> Not true!  There are 400 and 500GB near-line 24x7 disks from
> Seagate and Hitachi, respectively.
> 
> [ Sounds like someone fed you marketing. ;-]

I'm aware they exist, but go and try to buy a product from Dell with
anything larger than a 250GB SATA disk in it. Good luck ;) If you ask,
you'll be told that the larger disks haven't been approved in the
enterprise-type systems yet, but I imagine part of it is that they don't
want to cannibalise their SCSI market by offering products with a *much*
lower cost per GB - well, not yet anyhow.

> 
> In fact, one of Hitachi's big partners is Copan Systems, and
> they have the 500GB drives in their VTL solution.
> 
> > I guess my concern is that it's a new technology that's
> > not been widely adopted so far and all the issues that go
> > along with that.
> 
> That's the _least_ of your concerns.  AoE has _nothing_ in it
> from a SAN perspective.
> 
> I sure wish CORAID was more forthcoming on that.
> 
> At the same time, whenever I mention multi-targetting the
> same volume, it finally shuts up the marketeers.  It's all
> hype, 0 SAN substance.

Good to know - thanks.

> 
> 
> Dave Hornford <OSD@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> > What are you planning on running over the shared
> > connection? Database, eMail, File Shares? How many users?
> > How much data? What is your I/O profile?
> 

Mainly file shares with some imaging and D2D server backup. We have about
70,000 users. Up to 2TB right now... but the ability to grow it in the
future would be great. The I/O profile is low in terms of throughput and
the data isn't massively urgent. However, the CPUs on the servers are more
important and I don't want to load them too much in the process (hence my
fear of running iSCSI without an HBA).

> Agreed.  If you can afford the latency and lower DTR, iSCSI
> will do.  If you need maximum performance, really investigate
> SAS.
> 
> > I've worked with 'enterprise' storage most of my career
> > either as a consumer, adviser or provider - can't comment
> > on AoE other than to suggest you look at what are the
> > business & technical goals, how they solve it and what is
> > your risk profile against the business need of the system.
> > (I'm currently working as an adviser)
> 
> AoE is _not_ a SAN solution.
> That one statement removes it from any consideration.

I'm hearing that :) glad I asked.

> 
> At the same time, I haven't deployed a multi-targetted SAS
> solution yet.  The boards are out, the drives are out, the
> enclosures are out, and some subsystem products exist.  Just
> because it leverages existing SCSI-2 multi-targetting doesn't
> mean someone has a well-designed, well-respected, intelligent
> SAS multi-targettable/sharable solution yet.
> 
> > When you have an opportunity to chase into why "enterprise
> > support" of a new disk is lacking, it usually comes down to
> > insufficient time for testing, or known problems (heat,
> > torque, variability in sample, failure rate & edge-case
> > incompatibility with previously certified products are
> > normal)
> 
> AoE is designed for 1 thing -- centralized, segmented
> storage.  It does that well.  However, it is _not_ a SAN
> standard.  It does _not_ support multi-targetting of the same
> volume.

Again, excuse my ignorance: does multi-targeting mean that two systems can
share a volume (r/w) without the use of GFS? According to the Red Hat
cluster papers this isn't possible.
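
For context, my understanding of the GFS route (the one I was hoping to
avoid) is roughly this - a sketch assuming the RHEL4 Cluster Suite GFS
tools and made-up cluster/device names:

    # make a GFS filesystem with DLM locking and journals for 2 nodes
    gfs_mkfs -p lock_dlm -t mycluster:shared -j 2 /dev/etherd/e0.0

    # both nodes can then mount it read/write at the same time
    mount -t gfs /dev/etherd/e0.0 /data

Without that lock coordination, two hosts mounting an ordinary ext3 volume
r/w at once would just corrupt it.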

> 
> So that means it's no different than if you had local storage
> using 100% software (e.g., via GFS) to synchronize.  It is
> "dumb".
> 
> > You have hinted with the concern over TCP offload that you
> > may have higher-end performance needs and that this system
> > carries a high business value and needs a lower risk
> > solution.
> 

Indeed the system has a high business value. The servers that the system
runs on have a high business value (hence the cluster). However, performance
(in terms of I/O) isn't a massive issue. I was more concerned about what
the performance impact on the servers would be.

> Agreed.  If you only need storage in the same closet (same
> 25'), see what intelligent, multi-target SAS solutions are
> available.
> 
> > Remember risk is a cost.
> 

Yes, it's just a very easy one not to spend :) Especially when it's your
own dollars.

Many thanks for the feedback.

Nick

