The instances we use via Direct Connect (a third-party company) have
upwards of 20 disks and a total of 80 TB. That part is covered.
If we were to experiment with EBS, that would be a different case, as
we'd need to stripe the volumes ourselves.
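(For illustration only, if we did go the EBS route: a minimal striping
sketch with mdadm, assuming four attached EBS volumes; the device names
and mount point are hypothetical, not anything we run today.)

    # build a RAID0 stripe across four hypothetical EBS volumes
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
    # put a filesystem on the stripe and mount it
    mkfs.xfs /dev/md0
    mkdir -p /mnt/ebs-stripe
    mount /dev/md0 /mnt/ebs-stripe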
Our present model requires a single namespace via NFS. The
instances are running CentOS 6.x and mount the Direct Connect disk
space via NFS; the only alternative we'd have is iSCSI, which
wouldn't work for the level of sharing we need.
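Roughly speaking, each instance does something along these lines (the
server name and export path below are placeholders, not our real ones):

    # NFS mount of the Direct Connect export on a CentOS 6.x instance
    mount -t nfs -o vers=3,hard,intr,rsize=1048576,wsize=1048576 \
        storage-host:/export/shared /mnt/shared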
On 7/14/15 4:18 PM, Mathieu Chateau wrote:
By NFS I think you just mean "all servers seeing
and changing the same files"? That can be done with the FUSE client,
without NFS.
Failover is harder with NFS, while it is automatic with FUSE (no
need for dynamic DNS or a virtual IP).
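As a rough illustration (the volume and host names are made up, and the
option spelling varies a bit between Gluster versions), a FUSE mount can
list backup volfile servers so the mount survives losing the first server:

    # GlusterFS FUSE mount with backup volfile servers for failover
    mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
        gluster1:/myvol /mnt/gluster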
By redundancy I mean: what failures do you want to survive?
- Losing a disk
- Filesystem corruption
- A server lost or down for maintenance
- A whole region down
Depending on your needs, you may then have to replicate
data across Gluster bricks or even use geo-dispersed
bricks.
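For example (host, brick, and volume names are made up; exact syntax
depends on your Gluster version), a two-way replicated volume plus a
geo-replication session to a remote site might look like:

    # two-way replica across two servers (survives losing one server/brick)
    gluster volume create myvol replica 2 \
        gluster1:/bricks/b1 gluster2:/bricks/b1
    gluster volume start myvol

    # geo-replication to a remote site, to survive a whole-region outage
    gluster volume geo-replication myvol remotehost::remotevol create push-pem
    gluster volume geo-replication myvol remotehost::remotevol start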
Will the network between your servers and the storage nodes be able
to handle that traffic (380 MB/s = 3040 Mb/s)?
I guess Gluster can handle that load; you are using big
files, and that is where Gluster delivers its highest throughput.
Nevertheless, you will need many disks to provide that I/O,
even more if using replicated bricks.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users