Hello Harald!
I haven't tested the Infiniband transport so far, as I don't want to
interfere with the parallel applications running over Infiniband.
Gigabit Ethernet throughput would be sufficient for us at the moment.
Today "only" three nodes were affected, yesterday it were nine nodes.
The problems only occur on nodes to which jobs are scheduled that use
/scratch as their working directory. We test the filesystem in normal
operation: one user submits jobs to the queueing system that use
/scratch/... as their working directory. While some of his jobs run
without problems, others fail due to FS problems. No problems occur
on the usual NFS home directory.
When I test the FS with, e.g., dd on all nodes in parallel, no
problems occur.
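For reference, the parallel test was essentially the following (a
rough sketch, not the exact script; hostnames and sizes are
illustrative):

    # Write and read back 1 GB per node on /scratch, on all nodes at once.
    for node in node01 node02 node03; do   # ... through node88
        ssh $node "dd if=/dev/zero of=/scratch/ddtest.\$(hostname) bs=1M count=1024 && \
                   dd if=/scratch/ddtest.\$(hostname) of=/dev/null bs=1M && \
                   rm /scratch/ddtest.\$(hostname)" &
    done
    wait   # all nodes complete without I/O errors

This saturates the network and the bricks, but apparently it does not
reproduce whatever access pattern the failing jobs generate.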
Which timeout shall I increase?
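The only candidate I spotted so far is transport-timeout in the
protocol/client sections of the client volfile. If that is the one
you mean, I assume raising it would look roughly like this (a sketch
with placeholder names and a guessed value, not our generated
config):

    volume node01-client
      type protocol/client
      option transport-type tcp           # we run GlusterFS over TCP/IP
      option remote-host node01           # placeholder hostname
      option remote-subvolume brick
      option transport-timeout 120        # guessed value, for illustration only
    end-volume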
Regards,
Fred
On 25.11.2008, at 13:18, Harald Stürzebecher wrote:
Hello!
<Disclaimer>
I'm just a small-scale user with only a few months' experience with
GlusterFS, so my conclusions might be totally wrong.
</Disclaimer>
2008/11/25 Fred Hucht <fred@xxxxxxxxxxxxxx>:
Hi devels!
We are considering GlusterFS as a parallel file server (8 server
nodes) for our parallel Opteron cluster (88 nodes, ~500 cores), as
well as for a unified nufa /scratch distributed over all nodes. We
use the cluster in a scientific environment (theoretical physics) and
run Scientific Linux with kernel 2.6.25.16. After similar problems
with 1.3.x we installed 1.4.0qa61 and set up a /scratch for testing
using the following script "glusterconf.sh", which runs locally on
all nodes at startup and writes the two config files
/usr/local/etc/glusterfs-{server,client}.vol:
[...]
The cluster uses MPI over Infiniband, while GlusterFS runs over
TCP/IP on Gigabit Ethernet. I use FUSE 2.7.4 with the patch
fuse-2.7.3glfs10.diff (is that OK? The patch applied successfully).
Interesting setup, not using Infiniband for GlusterFS. The GlusterFS
homepage says "GlusterFS can sustain 1 GB/s per storage brick over
Infiniband RDMA". Personally, I'd like to know whether you tried it
at some point and chose not to use it.
Everything is fine until some nodes used by a job block on access to
/scratch or, some time later, give
df: `/scratch': Transport endpoint is not connected
The glusterfs.log on node36 is flooded by
[...]
On node68 I find
[...]
The third affected node, node77, says:
[...]
As I said, similar problems occurred with version 1.3.x. If these
problems cannot be solved, we will have to use a different file
system, so any help is greatly appreciated.
If I read that correctly, only three nodes out of 88 are affected by
this problem. In that case I think I'd look for hardware problems
first. Do you have an easy way to check your network connections for,
e.g., packet loss?
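Just as a sketch of what I mean (interface and host names are
assumptions, adjust them to your setup):

    # NIC error/drop counters on an affected node:
    ssh node36 "ifconfig eth0 | grep -E 'errors|dropped'"

    # A flood ping from a client to a server node reports packet loss
    # directly (needs root):
    ssh node36 "ping -f -c 10000 server1"

Rising error counters or any measurable loss would point at a cable,
switch port or NIC rather than at GlusterFS.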
Increasing timeouts might help until the real problem can be found
and fixed.
Additionally, I'd like to suggest running a test over Infiniband, if
possible, to rule out any Ethernet-related problems.
Harald Stürzebecher
Dr. Fred Hucht <fred@xxxxxxxxxxxxxx>
Institute for Theoretical Physics
University of Duisburg-Essen, 47048 Duisburg, Germany