Dear all,
For a few days now, some users have been reporting IT issues with
GlusterFS NFS shares, and more specifically with the NFS shares
available on my HPC clusters.
From their workstations, whenever they try to copy directories/files
from the NFS share to somewhere else while their working directory is
inside the share, they get "Stale NFS file handle" errors.
Here is an example: my machine is promethee, the remote machine hosting
the NFS share is hades, and the mount point of the NFS share on my
machine is /hades.
[me@promethee ~]$ cp -r /hades/Gquads/ /home/me/Gquads/
=> here everything is OK
[me@promethee ~]$ cd /hades
[me@promethee ~]$ cp -r Gquads/ /home/me/Gquads/
cp: reading `2KF8/dihedrals/traj.pdb': Stale NFS file handle
cp: failed to extend `/data/pasquali/Gquads/2KF8/dihedrals/traj.pdb':
Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/line_14.png': Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/polar_log_14.png': Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/polar_9.png': Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/polar_log_13.png': Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/line_15.png': Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/polar_log_12.png': Stale NFS file handle
cp: cannot stat `2KF8/dihedrals/line_21.png': Stale NFS file handle
It looks like there is a problem resolving relative paths...
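If it helps to diagnose, the same behaviour can be checked directly with
stat (a quick sketch on my side, reusing one of the failing files from
the output above):

[me@promethee ~]$ stat -c '%i %n' /hades/Gquads/2KF8/dihedrals/traj.pdb
[me@promethee ~]$ cd /hades && stat -c '%i %n' Gquads/2KF8/dihedrals/traj.pdb

If the relative access fails with "Stale NFS file handle" while the
absolute one succeeds, that would confirm the problem sits in the path
lookup on the client side rather than in the files themselves.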
For information: I re-export my GlusterFS volume mount through the
kernel NFS server (Gluster's built-in NFS server is disabled, cf.
nfs.disable below). In other words, hades is a master node whose /home
is a mount of a GlusterFS volume. Here are my volume settings:
[root@hades ~]# gluster volume info vol_home
Volume Name: vol_home
Type: Distributed-Replicate
Volume ID: f6ebcfc1-b735-4a0e-b1d7-47ed2d2e7af6
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp,rdma
Bricks:
Brick1: ib-storage1:/export/brick_home/brick1
Brick2: ib-storage2:/export/brick_home/brick1
Brick3: ib-storage3:/export/brick_home/brick1
Brick4: ib-storage4:/export/brick_home/brick1
Brick5: ib-storage1:/export/brick_home/brick2
Brick6: ib-storage2:/export/brick_home/brick2
Brick7: ib-storage3:/export/brick_home/brick2
Brick8: ib-storage4:/export/brick_home/brick2
Options Reconfigured:
features.default-soft-limit: 90%
features.quota: on
diagnostics.brick-log-level: CRITICAL
auth.allow: localhost,127.0.0.1,10.*
nfs.disable: on
performance.cache-size: 64MB
performance.write-behind-window-size: 1MB
performance.quick-read: on
performance.io-cache: on
performance.io-thread-count: 64
[root@hades ~]# cat /etc/exports
/home *.lbt.ibpc.fr(fsid=0,rw,root_squash)
[root@lucifer ~]# mount |grep home
ib-storage1:vol_home.rdma on /home type fuse.glusterfs
(rw,default_permissions,allow_other,max_read=131072)
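One untested idea on my side: since the kernel NFS server here
re-exports a FUSE mount, could the stale handles come from the FUSE
client dropping inodes that knfsd still holds handles for? If so, would
remounting the volume with the noforget option (which, as far as I
understand, pins inodes in the FUSE table for exactly this re-export
case) be worth a try? A sketch of what I have in mind, assuming
mount.glusterfs accepts this option on my version:

# untested: remount asking the fuse client to keep inodes pinned
umount /home
mount -t glusterfs -o noforget ib-storage1:/vol_home.rdma /home

Please correct me if noforget is not meant for this case.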
Any ideas? Am I the only one with this problem?
Thanks in advance,
Geoffrey
--
-----------------------------------------------
Geoffrey Letessier
IT Manager
CNRS - UPR 9080 - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
Tel: 01 58 41 50 93 - eMail: geoffrey.letessier@xxxxxxx
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users