Hi,

On Sat, 10 Oct 2009 15:41:33 -0400, Madison Kelly wrote:
> Andrew A. Neuschwander wrote:
> > Madison Kelly wrote:
> >> Hi all,
> >>
> >> Until now, I've been building 2-node clusters using DRBD+LVM for the
> >> shared storage. I've been teaching myself clustering, so I don't have
> >> a world of capital to sink into hardware at the moment. I would like
> >> to start getting some experience with 3+ nodes using a central SAN disk.
> >>
> >> So I've been pricing out the minimal hardware for a four-node
> >> cluster and have something to start with. My current hiccup, though, is
> >> the SAN side. I've searched around, but have not been able to get a
> >> clear answer.
> >>
> >> Is it possible to build a host machine (CentOS/Debian) with a
> >> simple MD device and make it available to the cluster nodes as an
> >> iSCSI/SAN device? Being a learning exercise, I am not too worried
> >> about speed or redundancy (beyond testing failure types and recovery).
> >>
> >> Thanks for any insight, advice, pointers!
> >>
> >> Madi
> >>
> >
> > If you want to use a Linux host as an iSCSI 'server' (a target in iSCSI
> > terminology), you can use IET, the iSCSI Enterprise Target:
> > http://iscsitarget.sourceforge.net/. I've used it and it works well, but
> > it is a little CPU hungry. Obviously, you don't get the benefits of a
> > hardware SAN, but you don't get the cost either.
> >

In addition to IET and the iSCSI client, you may want to take a look at
multipath too. I am also testing a home-brew SAN setup with two storage
machines and a GFS2 partition replicated via DRBD (Primary/Primary), which
in turn is exported from both via iSCSI and then imported on the nodes
with multipath to both storages. The idea is to get the performance of
both and to have no single point of failure for the storage either.

> > -Andrew
>
> Thanks, Andrew! I'll go look at that now.
>
> I was planning on building my SAN server on a core2duo-based system
> with 2GB of RAM. I figured that the server will do nothing but
> host/handle the SAN/iSCSI stuff, so the CPU consumption should be fine.
> Is there a way to quantify the "CPU/Memory hungry"-ness of running a SAN
> box? I.e.: what does a given read/write/etc. call "cost"?
>
> As an aside, beyond hot-swap/bandwidth/quality, what generally is the
> "advantage" of dedicated SAN/iSCSI hardware vs. white box roll-your-own?
>
> Thanks again!
>
> Madi

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
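
For anyone wanting to try the IET route mentioned above, a minimal sketch
of what exporting an MD device as an iSCSI target might look like. The
IQN, portal IP and backing device are made-up examples, and the config
file location varies by distribution and IET version (/etc/ietd.conf on
older packages, /etc/iet/ietd.conf on newer ones):

  # On the storage box: /etc/ietd.conf (illustrative names only)
  Target iqn.2009-10.com.example:storage.md0
      Lun 0 Path=/dev/md0,Type=fileio

  # (Re)start the target daemon, e.g. on Debian:
  /etc/init.d/iscsitarget restart

  # On each cluster node, with open-iscsi installed:
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2009-10.com.example:storage.md0 -p 192.168.1.10 --login

After the login the exported device should show up on the node as a new
SCSI disk (check dmesg or /proc/partitions for the new /dev/sd* entry).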
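
And a rough sketch of the multipath piece of the dual-storage idea above,
i.e. one multipathed device built from the two iSCSI sessions (one to each
storage box). The WWID and alias are placeholders - you would take the
real WWID from the output of multipath -ll on your own nodes - so treat
this as an illustration rather than a tested configuration:

  # On each node: /etc/multipath.conf (placeholder values)
  defaults {
      user_friendly_names yes
  }

  multipaths {
      multipath {
          # placeholder WWID; use the one multipath -ll reports
          wwid  360000000000000000e00000000010001
          alias sanvol
          # spread I/O across both paths instead of failover-only
          path_grouping_policy multibus
      }
  }

  # check that both paths are seen and active:
  multipath -ll

With an explicit alias, the combined device then appears as
/dev/mapper/sanvol on each node.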
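
On the "what does a read/write cost" question above, one crude way to put
numbers on it is to watch the target box while an initiator pushes
sequential I/O at it. The device name and sizes below are only examples:

  # On the storage box, watch CPU and memory once per second:
  vmstat 1

  # On one node, against the imported iSCSI disk (example name /dev/sdb).
  # WARNING: the write test overwrites the disk - scratch devices only.
  dd if=/dev/zero of=/dev/sdb bs=1M count=2048 oflag=direct
  dd if=/dev/sdb of=/dev/null bs=1M count=2048 iflag=direct

The us/sy columns in vmstat give a rough idea of how much CPU the target
side burns for a given MB/s of throughput.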