Re: FW: Strange issue concerning glusterfs 3.5.1 on centos 6.5


 



So here are my logs. I disabled SSL in the meantime, but the situation is the same: no replication!?
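For context, SSL on a volume is typically toggled with the stock client.ssl/server.ssl volume options, roughly like this (a minimal sketch; the volume has to be restarted for the change to take effect):

gluster volume set smbbackup client.ssl off
gluster volume set smbbackup server.ssl off
gluster volume stop smbbackup
gluster volume start smbbackup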



EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller@xxxxxxxxxxxxxxx
Internet: www.tropenklinik.de





-----Original Message-----
From: Krishnan Parthasarathi [mailto:kparthas@xxxxxxxxxx]
Sent: Wednesday, July 30, 2014 08:56
To: mueller@xxxxxxxxxxxxxxx
Cc: gluster-users@xxxxxxxxxxx; gluster-devel-bounces@xxxxxxxxxxx
Subject: Re: FW: Strange issue concerning glusterfs 3.5.1 on centos 6.5

Could you attach the entire mount and glustershd log files to this thread?

~KP

----- Original Message -----
> No one!??
> These are entries from my glustershd.log:
> [2014-07-30 06:40:59.294334] W [client-handshake.c:1846:client_dump_version_cbk] 0-smbbackup-client-1: received RPC status error
> [2014-07-30 06:40:59.294352] I [client.c:2229:client_rpc_notify] 0-smbbackup-client-1: disconnected from 172.17.2.31:49152. Client process will keep trying to connect to glusterd until brick's port is available
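> 
> For what it's worth, a quick way to probe whether those ports are reachable at all from the peer (a sketch using nc; any TCP check would do):
> 
> nc -z -w 3 172.17.2.31 49152 && echo "brick port reachable"
> nc -z -w 3 172.17.2.31 24007 && echo "glusterd port reachable"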
> 
> 
> This is from mnt-sicherung.log:
> [2014-07-30 06:40:38.259850] E [socket.c:2820:socket_connect] 1-smbbackup-client-0: connection attempt on 172.17.2.30:24007 failed, (Connection timed out)
> [2014-07-30 06:40:41.275120] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 1-smbbackup-client-0: changing port to 49152 (from 0)
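> 
> Since "Connection timed out" usually means packets are dropped rather than refused, the host firewall is worth checking on both nodes (a sketch; 24007 is glusterd, 49152 the first brick port):
> 
> iptables -L -n | grep -E '24007|49152'
> service iptables status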
> 
> [root@centclust1 sicherung]# gluster --remote-host=centclust1 peer status
> Number of Peers: 1
> 
> Hostname: centclust2
> Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> State: Peer in Cluster (Connected)
> 
> [root@centclust1 sicherung]# gluster --remote-host=centclust2 peer status
> Number of Peers: 1
> 
> Hostname: 172.17.2.30
> Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
> State: Peer in Cluster (Connected)
> 
> [root@centclust1 ssl]# ps aux | grep gluster
> root     13655  0.0  0.0 413848 16872 ?        Ssl  08:10   0:00 /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
> root     13958  0.0  0.0 12139920 44812 ?      Ssl  08:11   0:00 /usr/sbin/glusterfsd -s centclust1.tplk.loc --volfile-id smbbackup.centclust1.tplk.loc.sicherung-bu -p /var/lib/glusterd/vols/smbbackup/run/centclust1.tplk.loc-sicherung-bu.pid -S /var/run/4c65260e12e2d3a9a5549446f491f383.socket --brick-name /sicherung/bu -l /var/log/glusterfs/bricks/sicherung-bu.log --xlator-option *-posix.glusterd-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb --brick-port 49152 --xlator-option smbbackup-server.listen-port=49152
> root     13972  0.0  0.0 815748 58252 ?        Ssl  08:11   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/ee6f37fc79b9cb1968eca387930b39fb.socket
> root     13976  0.0  0.0 831160 29492 ?        Ssl  08:11   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/aa970d146eb23ba7124e6c4511879850.socket --xlator-option *replicate*.node-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
> root     15781  0.0  0.0 105308   932 pts/1    S+   08:47   0:00 grep gluster
> root     29283  0.0  0.0 451116 56812 ?        Ssl  Jul29   0:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/a7fcb1d1d3a769d28df80b85ae5d13c4.socket
> root     29287  0.0  0.0 335432 25848 ?        Ssl  Jul29   0:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/833e60f976365c2a307f92fb233942a2.socket --xlator-option *replicate*.node-uuid=64b1a7eb-2df3-47bd-9379-39c29e5a001a
> root     31698  0.0  0.0 1438392 57952 ?       Ssl  Jul29   0:12 /usr/sbin/glusterfs --acl --volfile-server=centclust1.tplk.loc --volfile-id=/smbbackup /mnt/sicherung
> 
> [root@centclust2 glusterfs]# ps aux | grep gluster
> root      1561  0.0  0.0 1481492 60152 ?       Ssl  Jul29   0:12 /usr/sbin/glusterfs --acl --volfile-server=centclust2.tplk.loc --volfile-id=/smbbackup /mnt/sicherung
> root     15656  0.0  0.0 413848 16832 ?        Ssl  08:11   0:01 /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
> root     15942  0.0  0.0 12508704 43860 ?      Ssl  08:11   0:00 /usr/sbin/glusterfsd -s centclust2.tplk.loc --volfile-id smbbackup.centclust2.tplk.loc.sicherung-bu -p /var/lib/glusterd/vols/smbbackup/run/centclust2.tplk.loc-sicherung-bu.pid -S /var/run/40a554af3860eddd5794b524576d0520.socket --brick-name /sicherung/bu -l /var/log/glusterfs/bricks/sicherung-bu.log --xlator-option *-posix.glusterd-uuid=4f15e9bd-9b5a-435b-83d2-4ed202c66b11 --brick-port 49152 --xlator-option smbbackup-server.listen-port=49152
> root     15956  0.0  0.0 825992 57496 ?        Ssl  08:11   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/602d1d8ba7b80ded2b70305ed7417cf5.socket
> root     15960  0.0  0.0 841404 26760 ?        Ssl  08:11   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/504d01c7f7df8b8306951cc2aaeaf52c.socket --xlator-option *replicate*.node-uuid=4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> root     17728  0.0  0.0 105312   936 pts/0    S+   08:48   0:00 grep gluster
> root     32363  0.0  0.0 451100 55584 ?        Ssl  Jul29   0:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/73054288d1cadfb87b4b9827bd205c7b.socket
> root     32370  0.0  0.0 335432 26220 ?        Ssl  Jul29   0:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/de1427ce373c792c76c38b12c106f029.socket --xlator-option *replicate*.node-uuid=83e6d78c-0119-4537-8922-b3e731718864
> 
> 
> 
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: mueller@xxxxxxxxxxxxxxx
> Internet: www.tropenklinik.de
> 
> 
> 
> -----Original Message-----
> From: Daniel Müller [mailto:mueller@xxxxxxxxxxxxxxx]
> Sent: Tuesday, July 29, 2014 16:02
> To: 'gluster-users@xxxxxxxxxxx'
> Subject: Strange issue concerning glusterfs 3.5.1 on centos 6.5
> 
> Dear all,
> 
> there is a strange issue with CentOS 6.5 and GlusterFS 3.5.1:
> 
> glusterd -V
> glusterfs 3.5.1 built on Jun 24 2014 15:09:41
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser General Public
> License, version 3 or any later version (LGPLv3 or later), or the GNU
> General Public License, version 2 (GPLv2), in all cases as published by
> the Free Software Foundation.
> 
> I am trying to set up a replicated 2-brick volume on two CentOS 6.5 servers.
> Peer probing works and my nodes report no errors:
>  
> [root@centclust1 mnt]# gluster peer status
> Number of Peers: 1
> 
> Hostname: centclust2
> Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
> State: Peer in Cluster (Connected)
> 
> [root@centclust2 sicherung]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 172.17.2.30
> Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
> State: Peer in Cluster (Connected)
> 
> Now I set up a replicated volume on an XFS disk:
> /dev/sdb1 on /sicherung type xfs (rw)
> 
> gluster volume create smbbackup replica 2 transport tcp centclust1.tplk.loc:/sicherung/bu centclust2.tplk.loc:/sicherung/bu
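> 
> As a sanity check of the resulting layout (a sketch; the expected output is paraphrased):
> 
> gluster volume info smbbackup
> # should report Type: Replicate, Number of Bricks: 1 x 2 = 2,
> # and list both bricks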
> 
> gluster volume status smbbackup reports OK:
> 
> [root@centclust1 mnt]# gluster volume status smbbackup
> Status of volume: smbbackup
> Gluster process                                         Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick centclust1.tplk.loc:/sicherung/bu                 49152   Y       31969
> Brick centclust2.tplk.loc:/sicherung/bu                 49152   Y       2124
> NFS Server on localhost                                 2049    Y       31983
> Self-heal Daemon on localhost                           N/A     Y       31987
> NFS Server on centclust2                                2049    Y       2138
> Self-heal Daemon on centclust2                          N/A     Y       2142
> 
> Task Status of Volume smbbackup
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> [root@centclust2 sicherung]# gluster volume status smbbackup
> Status of volume: smbbackup
> Gluster process                                         Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick centclust1.tplk.loc:/sicherung/bu                 49152   Y       31969
> Brick centclust2.tplk.loc:/sicherung/bu                 49152   Y       2124
> NFS Server on localhost                                 2049    Y       2138
> Self-heal Daemon on localhost                           N/A     Y       2142
> NFS Server on 172.17.2.30                               2049    Y       31983
> Self-heal Daemon on 172.17.2.30                         N/A     Y       31987
> 
> Task Status of Volume smbbackup
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> I mounted the volume on both servers with:
> 
> mount -t glusterfs centclust1.tplk.loc:/smbbackup /mnt/sicherung -o acl
> mount -t glusterfs centclust2.tplk.loc:/smbbackup /mnt/sicherung -o acl
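> 
> To see whether each client graph actually connected to both bricks, the mount log can be grepped for the two client translators (a sketch; the log file name follows the mount point):
> 
> grep -E 'smbbackup-client-[01]' /var/log/glusterfs/mnt-sicherung.log | tail -20
> # both client-0 and client-1 should show a successful connect, not timeouts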
> 
> But when I write to /mnt/sicherung, the files are not replicated to the
> other node at all!?
> 
> They stay on the local server in /mnt/sicherung and /sicherung/bu,
> separately on each node:
> [root@centclust1 sicherung]# pwd
> /mnt/sicherung
> 
> [root@centclust1 sicherung]# touch test.txt
> [root@centclust1 sicherung]# ls
> test.txt
> 
> [root@centclust2 sicherung]# pwd
> /mnt/sicherung
> [root@centclust2 sicherung]# ls
> more.txt
> 
> [root@centclust1 sicherung]# ls -la /sicherung/bu
> total 0
> drwxr-xr-x.  3 root root  38 29. Jul 15:56 .
> drwxr-xr-x.  3 root root  15 29. Jul 14:31 ..
> drw-------. 15 root root 142 29. Jul 15:56 .glusterfs
> -rw-r--r--.  2 root root   0 29. Jul 15:56 test.txt
> 
> [root@centclust2 sicherung]# ls -la /sicherung/bu
> total 0
> drwxr-xr-x. 3 root root 38 29. Jul 15:32 .
> drwxr-xr-x. 3 root root 15 29. Jul 14:31 ..
> drw-------. 7 root root 70 29. Jul 15:32 .glusterfs
> -rw-r--r--. 2 root root  0 29. Jul 15:32 more.txt
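> 
> When each node only sees its own files like this, the AFR metadata on the bricks and the self-heal view are worth a look (a sketch using stock commands):
> 
> getfattr -d -m . -e hex /sicherung/bu/test.txt   # dumps trusted.afr.* changelog xattrs
> gluster volume heal smbbackup info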
> 
> 
> 
> Greetings
> Daniel
> 
> 
> 
> EDV Daniel Müller
> 
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: mueller@xxxxxxxxxxxxxxx
> Internet: www.tropenklinik.de
> 
> 
> 
> 
> 

Attachment: glustershd.log
Description: Binary data

Attachment: nfs.log
Description: Binary data

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
