isplist@xxxxxxxxxxxx wrote:
What I don't get is the difference between iSCSI and
NAS/DAS yet. Both seem to be independent of a server while iSCSI seems to sit
behind the server unless it is a specialty device.
NAS is just a server running NFS, CIFS, Coda or another network file system.
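To give a concrete (made-up) example, a NAS box exporting a directory over NFS needs little more than a line in /etc/exports; the path and subnet here are just placeholders:

    # /etc/exports on the NAS box
    /export/data    192.168.0.0/24(rw,sync,no_subtree_check)

    # re-export, then mount from a client over the LAN
    exportfs -ra
    mount -t nfs nas:/export/data /mnt/data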
DAS (Direct Attached Storage?) probably refers to running shared storage
without a SAN (iSCSI/FC) appliance. You get the nodes themselves to
store and share the data between them, e.g. with DRBD/DDRaid/GNBD with
GFS on top. The disks live in the nodes, rather than in a SAN, but the
overall functionality is the same.
iSCSI and FC are equivalent. I personally prefer iSCSI because it's
cheaper. In terms of features there isn't a great deal to choose between
them.
And that's another thing. Having to find FC drives at a good price all the
time gets mighty tiring while watching huge IDE/SATA drives going so cheap.
It seems to make more sense to move to something that will allow me to use the
newer cheaper yet very fast drives.
FC will typically be what you use to talk to the chassis. The disks
inside will be the same as in any other appliance, these days probably
SAS/SATA.
What I wanted to build was something which would be simple to grow. I like FC
now that I've been learning it but it seems that I should be looking at other
options while I'm early in the game.
For flexibility and cost, I'd go with iSCSI over ethernet. If you are
building your own SAN appliance with a Linux box, growing is very easy.
If you are putting the disks into it and using software RAID 5/6, you can
just add a disk and grow the array online. Then you grow the file
system to match, and you can create more iSCSI volumes or enlarge the
existing ones. There really is no scalability limit, over and above the
file system size limits.
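Purely as a sketch (device names and the target config are made up, and I'm
assuming md RAID with IET as the iSCSI target), the grow sequence is along
these lines:

    # add the new disk and reshape the RAID5/6 array onto it
    mdadm --add /dev/md0 /dev/sdf
    mdadm --grow /dev/md0 --raid-devices=6

    # once the reshape finishes, grow whatever sits on top, e.g. ext3
    resize2fs /dev/md0

    # new iSCSI volumes are then just extra files/LVs exported by the target,
    # e.g. with iSCSI Enterprise Target in /etc/ietd.conf:
    #   Target iqn.2007-01.com.example:store.vol1
    #       Lun 0 Path=/srv/iscsi/vol1.img,Type=fileio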
I seem to read a lot of posts from folks who talk about assigning an IP to
something and it's there on the LAN ready for access. I understand how I can
grow my storage using FC but I'm not quite understanding these other
technologies. It seems that you simply add storage as needed, add an IP, it's
available. What I don't get is how all of that individual storage can be used
in some aggregate manner.
Unless I'm mistaken, you are describing DAS here. If that is the case,
see above.
These days, it's all about content-driven stuff, lots of audio and big video
files. I don't get how the servers manage to see what they need if it is
spread across dozens of storage devices on the IP side.
If you need the ultimate flexibility, you could always have all the
storage exported via iSCSI to an aggregator box, have that run a
software RAID to link them all up, and then re-export that via iSCSI.
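As a rough sketch (assuming open-iscsi on the aggregator and IET on the
storage boxes, with made-up names and addresses):

    # log into each storage box's target; each one shows up as a local disk
    iscsiadm -m discovery -t sendtargets -p 10.0.0.11
    iscsiadm -m node -T iqn.2007-01.com.example:box1 -p 10.0.0.11 --login
    # ...repeat for the other boxes, giving e.g. /dev/sdb /dev/sdc /dev/sdd

    # tie them together with software RAID on the aggregator
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

    # then re-export /dev/md0 (or volumes carved out of it) via ietd.conf as before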
http://sourceware.org/cluster/ddraid/
I stumbled upon it last night, and the idea seems great - network RAID
3.5 (n+1, like RAID 3/4/5). It seems to make sense for small-ish
clusters, or situations where you are stacking RAID / cluster levels.
I used to run an ISP in the early/mid 90's and what kept us awake and freezing
in the server room for days sometimes was growth. We didn't have all of the
cool things we have now so everything was new technology. The problem was that
as we grew, we would have to change almost everything out every couple of
years. It was a nightmare, constantly having to upgrade.
In my current venture, I'd like to be ready for growth by having a good, solid
solution up front. One that will last me through any growth, at least as far
as adding serving resources such as storage and highly reliable LAMP-based
services.
It rather depends on what your requirements and constraints are. You can
do all sorts of clever things with a multi-level-tree organisational
design of your storage solution (something like having a separate
aggregation box as I described above), and it'll scale pretty much as
far as you want it. Just add an additional disk chassis into the aggregator.
Gordan