Re: iSCSI GFS

> It's not virtualization. It is equivalent to mounting an NFS share, and
> then exporting it again from the machine that mounted it.

Ok, so a single machine that all of the storage is attached to. Won't that bog it 
down pretty quickly?

So, if I'm understanding this correctly, I can see how I could export everything 
from that one machine, but wouldn't the overall I/O load be unreal?
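For my own notes, I take the NFS analogy to mean roughly the following on that 
middle machine -- the hostname, paths and subnet are just placeholders I made up, 
and I realize re-exporting an NFS mount has its own caveats:

    # Mount the share from one of the storage boxes
    mount -t nfs storage1:/export/vol1 /mnt/vol1

    # Then hand it back out again via /etc/exports on this machine;
    # a re-exported mount needs an explicit fsid=, e.g.:
    #   /mnt/vol1  192.168.0.0/24(rw,no_subtree_check,fsid=101)

    # Reload the export table
    exportfs -ra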

How would this machine be turned into an aggregator? Would it keep track of where 
everything is, or would the servers still need to know which share to connect to 
in order to get the data they need?

I also happen to have a BlueArc i7500 machine which can offer up NFS shares. I 
didn't want to use anything like that because I've read too many messages about 
NFS not being a good protocol to grow on. Do you disagree?

> Exactly. You have a machine that pretends to be a SAN when it in fact
> has no space on it. Instead, it connects to all the individual storage
> nodes, mounts their volumes, merges them into one big volume, and then
> presents that one big volume via iSCSI.

Ok, I like it :). I don't quite get how I would aggregate it all into a single 
volume; I guess I've never played with software RAID that spans multiple storage 
devices and volumes. I get the idea, though.
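If I'm reading it right, on the aggregator it would go roughly like this -- the 
IQNs, IPs and device names below are placeholders I'm making up, and md RAID vs. 
LVM is just one way of doing the merging:

    # 1. Log in to each storage node's iSCSI target
    iscsiadm -m discovery -t sendtargets -p 192.168.0.11
    iscsiadm -m node -T iqn.2009-01.com.example:node1.vol1 -p 192.168.0.11 --login
    iscsiadm -m node -T iqn.2009-01.com.example:node2.vol1 -p 192.168.0.12 --login
    # (each login shows up locally as /dev/sdb, /dev/sdc, ...)

    # 2. Merge them into one big volume with software RAID
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]
    #    (LVM -- pvcreate/vgcreate/lvcreate -- would be the other option)

    # 3. Present the merged volume as a single iSCSI target (tgt shown here)
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           -T iqn.2009-01.com.example:aggregator.bigvol
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Then the GFS nodes would all log in to that one aggregator target and run GFS on 
top of it, if I follow.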

For hardware, would this aggregator need massive resources in terms of CPU or 
memory? I have IBMs with 8-way CPUs that can take up to 64GB of memory.

Would the aggregator itself be a potential cluster candidate? Might it be 
possible to run a cluster of them for safety and to offload some of the work?

This is interesting. I can see that if I could get to VM/shareroot plus something 
like this, I would have something quite nice going.

> It's a central connection point AND a router, only it isn't just
> straight routing, because the data is RAID striped for redundancy.

Right, I just don't yet see how the aggregator handles all of that I/O. Or does 
it simply tell the servers which storage device to connect to, so that it doesn't 
actually have to carry all of the I/O itself?
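My (possibly wrong) mental model is that with a plain iSCSI target sitting in 
front of an md volume, every block really does pass through the aggregator rather 
than it just redirecting the initiators, so that box is where I'd expect to be 
watching things like:

    # all initiator traffic lands on md0 before being striped back out
    cat /proc/mdstat
    iostat -x 2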

Mike



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
