Peter Blajev wrote:
I have 4 systems and each one of them has a partition I'd like to be remotely
accessible on the other 3 systems.
In other words, System1 has Partition1, and Systems 2, 3, and 4 should be able
to remotely mount Partition1 from System1. Likewise, System2 has Partition2,
and Systems 1, 3, and 4 should be able to remotely mount Partition2 from
System2, and so on.
I tried NFS and it works, but only in an ideal world. If one of the systems
goes down, the NFS cross-mounting makes the remaining systems somewhat
unstable. It's a known issue and I believe you guys are aware of it, but I
just had to see it for myself.
What would you recommend? What is the best practice for doing that?
Unfortunately SAN and NAS are not really an option due to some financial
restrictions. I'm thinking SMB...? Would that work?
If system 1 depends on system 2, AND system 2 depends on system 1, you're
asking for problems.

The normal way people do this is to designate a SERVER and have all the
other systems mount data off that server. The server should be designed
and operated to maximize uptime.
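The single-server layout described above can be sketched with ordinary NFS configuration. This is only an illustrative sketch: the hostname `server1`, the subnet, and the path `/export/data` are placeholders, not anything from the thread, and the `soft` option is one possible mitigation for the hang-on-server-down problem, with a known trade-off.

```shell
# On the designated server: export the shared partition.
# /etc/exports -- restrict access to the client subnet, read-write:
#   /export/data  192.168.1.0/24(rw,sync,no_subtree_check)

# Re-read the export table and enable NFS at boot (CentOS-era SysV init):
exportfs -ra
chkconfig nfs on

# On each client: mount the export.
# "soft,intr" makes I/O return an error instead of hanging forever if
# the server goes down; note that soft mounts can silently lose writes,
# so only use them where that risk is acceptable.
mount -t nfs -o soft,intr,timeo=30 server1:/export/data /mnt/data

# Or in /etc/fstab for a persistent mount:
#   server1:/export/data  /mnt/data  nfs  soft,intr,timeo=30  0 0
```

With this arrangement only the server has to stay up; a client going down doesn't destabilize the others, which is the failure mode the cross-mounting setup runs into.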
SMB is a Microsoft Windows protocol and rather foreign to Unix/Linux:
fine for Linux<->Windows use, but a poor choice for Linux<->Linux.
NAS is simply a turnkey NFS fileserver.
SAN is a block-storage setup and doesn't itself allow sharing between
systems, except in specific cluster configurations. You'd end up with:

[SAN disks]---SAN---[server system]---NFS---[client systems]
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos