-Atin
Sent from OnePlus One
On Aug 16, 2015 9:59 PM, "ousmane sanogo" <sanoousmane@xxxxxxxxx> wrote:
>
> Hello,
> After updating my system and Gluster, I get this error when I create an instance:
>
> [2015-08-16 16:03:31.853317] I [MSGID: 109036] [dht-common.c:7087:dht_log_new_layout_for_dir_selfheal] 0-vol_instances-dht: Setting layout of /a0620e02-6529-41d3-a586-e8fd2db73027 with [Subvol_name: vol_instances-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
> [2015-08-16 16:03:31.876220] I [MSGID: 109066] [dht-rename.c:1410:dht_rename] 0-vol_instances-dht: renaming /a0620e02-6529-41d3-a586-e8fd2db73027/disk.info.tmp (hash=vol_instances-replicate-0/cache=vol_instances-replicate-0) => /a0620e02-6529-41d3-a586-e8fd2db73027/disk.info (hash=vol_instances-replicate-0/cache=vol_instances-replicate-0)
> [2015-08-16 16:03:31.898112] E [MSGID: 108008] [afr-transaction.c:1984:afr_transaction] 0-vol_instances-replicate-0: Failing TRUNCATE on gfid 84c65aa0-ad2f-484d-b10e-7f78227c6422: split-brain observed. [Input/output error]
> [2015-08-16 16:03:31.898162] W [fuse-bridge.c:690:fuse_truncate_cbk] 0-glusterfs-fuse: 1596: TRUNCATE() /locks/nova-3b373aaca5e2e22121f07dd25ecbc72cb4898964 => -1 (Input/output error)
This looks like a split-brain issue. Could you delete this file from one of the bricks on the back end and then trigger 'gluster volume heal vol_instances'?
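A rough sketch of the steps, assuming the brick path /var/vol-instances from your volume status below; the choice of node is only an example, so first verify (e.g. with stat or md5sum on the file on both bricks) which copy is the stale one before deleting anything:

    # 1. Confirm which entries are in split-brain
    gluster volume heal vol_instances info split-brain

    # 2. On the node holding the stale copy, remove the file from the brick
    #    directly, along with its gfid hard link under .glusterfs
    rm /var/vol-instances/locks/nova-3b373aaca5e2e22121f07dd25ecbc72cb4898964
    rm /var/vol-instances/.glusterfs/84/c6/84c65aa0-ad2f-484d-b10e-7f78227c6422

    # 3. Trigger the heal so the good copy is replicated back
    gluster volume heal vol_instances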
>
> [2015-08-16 16:03:32.003440] I [MSGID: 109066] [dht-rename.c:1410:dht_rename] 0-vol_instances-dht: renaming /a0620e02-6529-41d3-a586-e8fd2db73027 (hash=vol_instances-replicate-0/cache=vol_instances-replicate-0) => /a0620e02-6529-41d3-a586-e8fd2db73027_del (hash=vol_instances-replicate-0/cache=<nul>)
>
>
>
> Status of volume: vol_instances
> Gluster process                          TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 172.16.10.2:/var/vol-instances     49157     0          Y       7665
> Brick 172.16.10.3:/var/vol-instances     49160     0          Y       10161
> NFS Server on localhost                  N/A       N/A        N       N/A
> Self-heal Daemon on localhost            N/A       N/A        Y       7697
> NFS Server on kvm                        N/A       N/A        N       N/A
> Self-heal Daemon on kvm                  N/A       N/A        Y       9956
> NFS Server on 172.16.10.3                N/A       N/A        N       N/A
> Self-heal Daemon on 172.16.10.3          N/A       N/A        Y       10193
>
> Task Status of Volume vol_instances
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> glusterd -V
> glusterfs 3.7.3 built on Jul 28 2015 14:24:41
>
>
> How can I solve it?
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users