Re: iSCSI GFS

> iSCSI is just a connection protocol like Fibre Channel. They both do the
> same thing. iSCSI works over ethernet, while FC works over fibre. iSCSI
> is cheaper, and FC has traditionally been faster (although the point
> gets a bit moot with 1Gb and 10Gb ethernet, as the storage stops being
> the bottleneck).

Right, I know they are similar, but the big difference seems to be that iSCSI 
is simpler than FC. What I don't like about FC right now is that the moment I 
make any changes to the storage, things get really complicated. For some 
reason, after I take storage away to replace it with something different, 
Linux goes haywire on me. I've posted about that before: a long stream of 
messages about long-gone storage scrolling across the console on bootup, and 
a lot of stale information sitting in the /etc/lvm directory. It just gets 
darn messy, and since I'm not a pro at any one thing, including FC, it 
confuses me when things get funky, which equals downtime. 
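
For the archives, what seems to clean up that stale LVM state, from what I've 
gathered so far. This is only a sketch assuming LVM2; "myvg" is a placeholder 
volume group name and the cache file path varies by release:

    # Rescan so LVM rebuilds its view of the devices that actually exist
    vgscan

    # Drop references to physical volumes that are long gone
    vgreduce --removemissing myvg    # "myvg" is a placeholder VG name

    # LVM keeps a device cache; removing it forces a clean rescan
    # (some releases put it at /etc/lvm/cache/.cache instead)
    rm -f /etc/lvm/.cache
    vgscan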

I replaced the drives in a storage chassis a few days ago. It seems I ended 
up doing a lot of things I probably didn't need to, and I still ended up with 
downtime. 

I unmounted the shared storage from each node and then took the nodes out of 
the cluster, because they don't seem to handle FC changes well while live. I 
then shut down the storage, replaced the drives, and reformatted, ready to 
go. I restarted each node so that it would cleanly see the new storage, yet 
tons of old garbage still flew across the screen. 
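
In hindsight, the reboots may not have been needed at all. On a 2.6 kernel 
you can apparently drop the stale devices and rescan the FC HBA through 
sysfs. A sketch, where sdc and host0 stand in for whatever your system 
actually shows:

    # Tell the kernel to forget a stale device (sdc is a placeholder)
    echo 1 > /sys/block/sdc/device/delete

    # Rescan the HBA for new LUNs (host0 is a placeholder; check
    # /sys/class/scsi_host/ for the real host numbers)
    echo "- - -" > /sys/class/scsi_host/host0/scan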

I've gotten into LUN zoning and fibre zoning, and it just gets ugly. From 
what I can tell of iSCSI, you just put some storage on the network, assign it 
an IP, and it's all IP simple. What I don't get yet is the difference between 
iSCSI and NAS/DAS. Both seem to be independent of a server, while iSCSI seems 
to sit behind a server unless it is a specialty device. 
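
The part I do follow is the attach step: with the open-iscsi initiator tools 
it looks roughly like this, if I'm reading the docs right (the IP and the IQN 
below are made-up placeholders):

    # Ask the target at 192.168.1.50 what it exports
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

    # Log in to one of the discovered targets; the LUN then shows up
    # as an ordinary /dev/sdX block device
    iscsiadm -m node -T iqn.2007-01.com.example:storage.disk1 \
             -p 192.168.1.50:3260 --login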
 
> iSCSI and FC are equivalent. I personally prefer iSCSI because it's
> cheaper. In terms of features there isn't a great deal to choose between
> them.

And that's another thing. Having to find FC drives at a good price all the 
time gets mighty tiring while watching huge IDE/SATA drives go so cheap. It 
seems to make more sense to move to something that will let me use the 
newer, cheaper, yet very fast drives. 

What I wanted to build was something that would be simple to grow. I like FC 
now that I've been learning it, but it seems I should be looking at other 
options while I'm early in the game. 

I seem to read a lot of posts from folks who talk about assigning an IP to 
something and having it there on the LAN, ready for access. I understand how 
I can grow my storage using FC, but I'm not quite understanding these other 
technologies. It seems that you simply add storage as needed, give it an IP, 
and it's available. What I don't get is how all of that individual storage 
can be used in some aggregate manner. 

These days it's all about content-driven stuff, lots of audio and big video 
files. I don't get how the servers manage to see what they need if it is 
spread across dozens of storage devices on the IP side.
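
From what I've pieced together so far, this is where CLVM and GFS come in: 
each iSCSI LUN shows up on every node as a plain block device, and you pool 
them into one clustered volume group that all nodes mount as a single 
filesystem. A sketch of what I think that looks like; the device names, VG 
name, size, and cluster name are all made-up:

    # Each iSCSI LUN appears as a local disk on every node
    pvcreate /dev/sdb /dev/sdc /dev/sdd

    # Pool them into one clustered volume group
    vgcreate -c y bigvg /dev/sdb /dev/sdc /dev/sdd

    # Carve out one logical volume spanning the LUNs
    lvcreate -L 500G -n media bigvg

    # Make a GFS filesystem on it: lock_dlm for cluster locking,
    # "mycluster" is the cluster name, 3 journals for 3 nodes
    gfs_mkfs -p lock_dlm -t mycluster:media -j 3 /dev/bigvg/media

If that's right, adding storage later is just another LUN into the VG, which 
would answer my aggregation question.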

> http://sourceware.org/cluster/ddraid/
> I stumbled upon it last night, and the idea seems great - network RAID
> 3.5 (n+1 like RAID 3,4,5). It seems to make sense for small-ish
> clusters, or situations where you are stacking RAID / cluster levels

I used to run an ISP in the early/mid '90s, and what kept us awake and 
freezing in the server room for days at a time was growth. We didn't have all 
of the cool things we have now, so everything was new technology. The problem 
was that as we grew, we had to change almost everything out every couple of 
years. It was a nightmare, constantly having to upgrade. 

In my current venture, I'd like to be ready for growth by having a good, 
solid solution up front: one that will last me through growth, at least as 
far as adding serving resources such as storage and highly reliable 
LAMP-based services. 

Lots of good feedback again; I sense I'll be trying some new things soon. I 
can't wait for the day when I know enough to reply to questions with the 
answers people are looking for :).

Thanks folks.

Mike



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
