On Wed, Jun 15, 2016 at 7:09 AM, Pepe Charli <ppcharli@xxxxxxxxx> wrote:
> Hi,
>
> $ gluster vol info cfe-gv1
>
> Volume Name: cfe-gv1
> Type: Distributed-Replicate
> Volume ID: 70632183-4f26-4f03-9a48-e95f564a9e8c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: srv-vln-gfsc1n1:/expgfs/cfe/brick1/brick
> Brick2: srv-vln-gfsc1n2:/expgfs/cfe/brick1/brick
> Brick3: srv-vln-gfsc1n3:/expgfs/cfe/brick1/brick
> Brick4: srv-vln-gfsc1n4:/expgfs/cfe/brick1/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> user.cifs: disable
> user.smb: disable
> user.cifs.disable: on
> user.smb.disable: on
> client.event-threads: 4
> server.event-threads: 4
> cluster.lookup-optimize: on
> cluster.server-quorum-type: server
> cluster.server-quorum-ratio: 51%
>
> I did not see any errors in the logs.
>
> I could move the file through an intermediate directory, /tmp (not on GlusterFS):
> $ mv /u01/2016/03/fichero.xml /tmp
> $ mv /tmp/fichero.xml /u01/procesados/2016/03/
>
> I did not think to restart the volume.
> What do you think the problem could be?

Would you happen to know how reproducible this problem is?

Looking at the coreutils source code, the error message mentioned in the earlier post does appear to be reported by a ln/link operation. dht uses hard links as part of its rename transaction, and that is probably what triggers the error. Including the dht maintainers, Raghavendra and Shyam, to take a look into this issue.

Regards,
Vijay

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
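To illustrate the mechanism Vijay describes: a "File exists" error from a link operation is the EEXIST failure of link(2) when the destination name already exists. The sketch below is not GlusterFS/dht code, just a minimal local reproduction of that POSIX behavior (file names are invented for the demo); if dht's rename transaction leaves or races against a stale link target, the same error would surface through coreutils.

```shell
# Hedged sketch: link(2) fails with EEXIST ("File exists") when the
# destination name is already present. Names below are illustrative.
demo=$(mktemp -d)
cd "$demo"
echo data > fichero.xml

# First hard link succeeds; both names now refer to the same inode.
ln fichero.xml procesados.xml

# A second attempt on the same destination fails with "File exists",
# the same message coreutils prints for a failed ln/link.
ln fichero.xml procesados.xml 2>&1 || true
```

This is why moving the file through a non-GlusterFS directory such as /tmp works around the problem: it avoids dht's link-based rename path entirely.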