Re: Home-brew SAN/iSCSI

On Sat, Oct 10, 2009 at 09:05:32PM +0100, Corey Kovacs wrote:
>    One thing to keep in mind is the fact that iscsi is a tcp based protocol.
>    So even though your machine might be doing nothing but acting as an iscsi
>    target, it's going to take the brunt of the load in handling the tcp
>    stack. If you can get a network card that handles iscsi on the card
>    itself, that will help loads. Otherwise your cpu might dig a hole for
>    itself to crawl into.
> 

I don't think iSCSI HBA drivers for use in the _target_ are publicly
available. iSCSI HBAs on the initiator (client) side are supported, of course.

Then again, most NICs nowadays offload TCP/IP, and most can also offload
iSCSI. Dedicated iSCSI HBAs are becoming legacy hardware.
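If you want to see what a given NIC actually offloads, ethtool reports the
offload features the driver has enabled. A quick check (the interface name
eth0 is just an example, substitute your own):

```shell
# Show offload features for the interface (replace eth0 with yours).
# Lines like "tcp segmentation offload: on" mean the NIC, not the CPU,
# is doing that part of the work.
ethtool -k eth0
```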

>    Of course if you're just messing about, or only using the iscsi targets
>    locally, then you're probably ok.
> 
>    Benefits of a dedicated device are management capabilities, throughput,
>    flexible location, etc. Fibre Channel is 8Gb standard now and SANs are
>    starting to use it instead of 4Gb, but the entry point in terms of cost
>    is high. A fully loaded EVA8100 can cost 250k, and the FC infrastructure
>    can go to 60-80k easily. iSCSI really needs to have a separate back-end
>    storage network to be useful, and it should be 10Gb. I hear people say
>    it's useful on slower hardware, but everyone has an opinion. I guess if
>    you're just using it for system volumes and low IO then 1G might be fine.
> 

1G iSCSI works very well for many workloads, depending mostly on your
storage/target setup.

A 1G link can handle a lot of random I/Os; you're most probably limited by
the number of disk spindles anyway.
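Some back-of-envelope arithmetic supports that. A sketch of how many small
random I/Os fit through a 1 Gb/s wire (the 4 KiB block size is an assumption,
and protocol overhead is ignored):

```shell
# How many 4 KiB random I/Os per second would saturate a 1 Gb/s link?
link_bits=1000000000      # 1 Gb/s raw wire speed
io_bytes=4096             # assumed 4 KiB per random I/O
iops=$(( link_bits / 8 / io_bytes ))
echo "link saturates at about ${iops} IOPS"
```

At roughly 100-200 random IOPS per spindle, you would need on the order of a
hundred or more disks behind the target before the 1G wire, rather than the
disks, becomes the bottleneck.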

FC is becoming legacy as well.. IMHO :)

-- Pasi

>    Anyway, hope this helps, and if it doesn't, at least it might give you
>    more to think about.
> 
>    Best of luck
> 
>    Corey
> 
>    On Sat, Oct 10, 2009 at 8:41 PM, Madison Kelly <[1]linux@xxxxxxxxxxx>
>    wrote:
> 
>      Andrew A. Neuschwander wrote:
> 
>        Madison Kelly wrote:
> 
>          Hi all,
> 
>           Until now, I've been building 2-node clusters using DRBD+LVM for
>          the shared storage. I've been teaching myself clustering, so I don't
>          have a world of capital to sink into hardware at the moment. I would
>          like to start getting some experience with 3+ nodes using a central
>          SAN disk.
> 
>           So I've been pricing out the minimal hardware for a four-node
>          cluster and have something to start with. My current hiccup though
>          is the SAN side. I've searched around, but have not been able to get
>          a clear answer.
> 
>           Is it possible to build a host machine (CentOS/Debian) to have a
>          simple MD device and make it available to the cluster nodes as an
>          iSCSI/SAN device? Being a learning exercise, I am not too worried
>          about speed or redundancy (beyond testing failure types and
>          recovery).
> 
>          Thanks for any insight, advice, pointers!
> 
>          Madi
> 
>        If you want to use a Linux host as an iSCSI 'server' (a target in
>        iSCSI terminology), you can use IET, the iSCSI Enterprise Target:
>        [2]http://iscsitarget.sourceforge.net/. I've used it and it works
>        well, but it is a little CPU hungry. Obviously, you don't get the
>        benefits of a hardware SAN, but you don't get the cost either.
> 
>        -Andrew
> 
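(For reference, an IET target definition is only a few lines of config. A
minimal /etc/ietd.conf sketch, where the IQN and the device path /dev/md0 are
examples to be replaced with your own:)

```
# /etc/ietd.conf -- minimal IET target (names and paths are examples)
Target iqn.2009-10.example.com:storage.disk1
        # blockio passes I/O straight to the device; fileio goes
        # through the page cache instead
        Lun 0 Path=/dev/md0,Type=blockio
        # Optional CHAP authentication for initiators:
        # IncomingUser someuser somesecretpass
```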
>      Thanks, Andrew! I'll go look at that now.
> 
>       I was planning on building my SAN server on a Core 2 Duo-based system
>      with 2GB of RAM. I figured that the server will do nothing but
>      host/handle the SAN/iSCSI stuff, so the CPU consumption should be fine.
>      Is there a way to quantify the "CPU/memory hungry"-ness of running a
>      SAN box? I.e.: what does a given read/write/etc. call "cost"?
> 
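(One rough way to quantify it: run a benchmark from an initiator and watch
where the target's CPU time goes while the I/O is in flight. A sketch,
assuming the sysstat tools are installed and IET is the target software:)

```shell
# On the target machine, while a client pushes I/O at it:
mpstat -P ALL 1 10           # per-CPU load; %soft shows TCP/softirq cost
top -b -n 1 | grep -i iet    # CPU consumed by the IET threads themselves
```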
>       As an aside, beyond hot-swap/bandwidth/quality, what generally is the
>      "advantage" of dedicated SAN/iSCSI hardware vs. white box roll-your-own?
> 
>      Thanks again!
> 
>      Madi
>      --
>      Linux-cluster mailing list
>      [3]Linux-cluster@xxxxxxxxxx
>      [4]https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> References
> 
>    Visible links
>    1. mailto:linux@xxxxxxxxxxx
>    2. http://iscsitarget.sourceforge.net/
>    3. mailto:Linux-cluster@xxxxxxxxxx
>    4. https://www.redhat.com/mailman/listinfo/linux-cluster


