Good morning Atin,
Thanks for the reply.
I believe that log file is "rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log", please correct me if I'm wrong. However, it happens to be empty. See below:
ls -lah /var/log/glusterfs/|grep data
-rw-------. 1 root root 0 Jun 13 17:09 glfsheal-data.log
-rw-------. 1 root root 34K Jun 4 03:06 glfsheal-data.log-20170604.gz
-rw-------. 1 root root 563K Jun 7 16:01 glfsheal-data.log-20170613
-rw-------. 1 root root 0 Jun 13 17:09 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log
-rw-------. 1 root root 61K Jun 4 03:08 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log-20170604.gz
-rw-------. 1 root root 164K Jun 8 08:58 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log-20170613
-rw-------. 1 root root 0 Jun 4 03:08 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log
-rw-------. 1 root root 371 Jun 28 03:30 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:engine.log
-rw-------. 1 root root 16K May 31 14:12 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log-20170604
-rw-------. 1 root root 4.8K Jun 4 03:08 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log-20170604.gz
-rw-------. 1 root root 34K Jun 13 17:09 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log-20170613.gz
-rw-------. 1 root root 21K Jun 18 03:10 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log-20170618.gz
-rw-------. 1 root root 32K Jun 25 03:26 rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_engine.log-20170625
[root@ovirt-hyp-01 ~]# cat /var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt-hyp-01.reis.com:_data.log
[root@ovirt-hyp-01 ~]#
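If it would help, I can try to reproduce the mount failure with more verbose client-side logging and send that output instead. This is only a sketch of what I had in mind (the DEBUG level, the test mount point /mnt/data-test, and the log file /tmp/data-mount-debug.log are just my own guesses, I haven't run it yet):

# raise the client log level for the data volume (assuming DEBUG is appropriate here)
gluster volume set data diagnostics.client-log-level DEBUG

# attempt a manual FUSE mount with an explicit log file so the failure gets captured
# (/mnt/data-test and /tmp/data-mount-debug.log are placeholder paths I made up)
mkdir -p /mnt/data-test
mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/data-mount-debug.log ovirt-hyp-01.reis.com:/data /mnt/data-test

# put the log level back afterwards
gluster volume set data diagnostics.client-log-level INFO

If that looks reasonable, I'll run it and attach /tmp/data-mount-debug.log.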
Please let me know what other information I can provide.
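In the meantime, this is what I was planning to collect from this node next, in case any of it is useful (just my guess at what might help):

gluster volume heal data info
gluster volume status data detail
gluster volume get data all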
Thank you,
Joel
On Wed, Jun 28, 2017 at 12:08 AM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
The mount log file of the volume would help in debugging the actual cause.

On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz@xxxxxxxxx> wrote:

Good morning Gluster users,

I'm very new to the Gluster file system. My apologies if this is not the correct way to seek assistance. However, I would appreciate some insight into understanding the issue I have.

I have three nodes running two volumes, engine and data. The third node is the arbiter on both volumes. Both volumes were operating fine, but one of them, data, no longer mounts.

Please see below:

gluster volume info all

Volume Name: data
Type: Replicate
Volume ID: 1d6bb110-9be4-4630-ae91-36ec1cf6cc02
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.170.141:/gluster_bricks/data/data
Brick2: 192.168.170.143:/gluster_bricks/data/data
Brick3: 192.168.170.147:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable

Volume Name: engine
Type: Replicate
Volume ID: b160f0b2-8bd3-4ff2-a07c-134cab1519dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.170.141:/gluster_bricks/engine/engine
Brick2: 192.168.170.143:/gluster_bricks/engine/engine
Brick3: 192.168.170.147:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable

df -h

Filesystem                                    Size  Used Avail Use% Mounted on
/dev/mapper/centos_ovirt--hyp--01-root         50G  3.9G   47G   8% /
devtmpfs                                      7.7G     0  7.7G   0% /dev
tmpfs                                         7.8G     0  7.8G   0% /dev/shm
tmpfs                                         7.8G  8.7M  7.7G   1% /run
tmpfs                                         7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_ovirt--hyp--01-home         61G   33M   61G   1% /home
/dev/mapper/gluster_vg_sdb-gluster_lv_engine   50G  8.1G   42G  17% /gluster_bricks/engine
/dev/sda1                                     497M  173M  325M  35% /boot
/dev/mapper/gluster_vg_sdb-gluster_lv_data    730G  157G  574G  22% /gluster_bricks/data
tmpfs                                         1.6G     0  1.6G   0% /run/user/0
ovirt-hyp-01.reis.com:engine                   50G  8.1G   42G  17% /rhev/data-center/mnt/glusterSD/ovirt-hyp-01.reis.com:_engine

gluster volume status data

Status of volume: data
Gluster process                                   TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------
Brick 192.168.170.141:/gluster_bricks/data/data   49157     0          Y       11967
Brick 192.168.170.143:/gluster_bricks/data/data   49157     0          Y       2901
Brick 192.168.170.147:/gluster_bricks/data/data   49158     0          Y       2626
Self-heal Daemon on localhost                     N/A       N/A        Y       16211
Self-heal Daemon on 192.168.170.147               N/A       N/A        Y       3402
Self-heal Daemon on 192.168.170.143               N/A       N/A        Y       20254

Task Status of Volume data
-----------------------------------------------------------------------------------
There are no active volume tasks

gluster peer status

Number of Peers: 2

Hostname: 192.168.170.143
Uuid: b2b30d05-cf91-4567-92fd-022575e082f5
State: Peer in Cluster (Connected)
Other names:
10.0.0.2

Hostname: 192.168.170.147
Uuid: 4e50acc4-f3cb-422d-b499-fb5796a53529
State: Peer in Cluster (Connected)
Other names:
10.0.0.3

Any assistance in understanding how and why the volume no longer mounts, and a possible resolution, would be greatly appreciated.

Thank you,

Joel

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users