In that case, could you share the ganesha-gfapi logs?
-Krutika

On Sun, Mar 19, 2017 at 12:13 PM, Mahdi Adnan <mahdi.adnan@xxxxxxxxxxx> wrote:
I have two volumes: one is mounted using libgfapi for the oVirt mount, and the other is exported via NFS-Ganesha for VMware, which is the one I'm testing now.
--
Respectfully
Mahdi A. Mahdi
On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan <mahdi.adnan@xxxxxxxxxxx> wrote:
Kindly check the attached new log file. I don't know if it's helpful or not, but I couldn't find a log with the name you just described.
No. Are you using FUSE or libgfapi for accessing the volume? Or is it NFS?
-Krutika
--
Respectfully
Mahdi A. Mahdi
mnt-disk11-vmware2.log seems like a brick log. Could you attach the FUSE mount logs? They should be right under the /var/log/glusterfs/ directory, named after the mount point path, only hyphenated.
-Krutika
On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan <mahdi.adnan@xxxxxxxxxxx> wrote:
Hello Krutika,
Kindly check the attached logs.
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
Sent: Saturday, March 18, 2017 3:29:03 PM
To: Mahdi Adnan
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Gluster 3.8.10 rebalance VMs corruption

Hi Mahdi,

Could you attach the mount, brick, and rebalance logs?

-Krutika
On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan <mahdi.adnan@xxxxxxxxxxx> wrote:
Hi,
I upgraded to Gluster 3.8.10 today and ran the add-brick procedure on a volume containing a few VMs. After the rebalance completed, I rebooted the VMs; some of them came up just fine, while others crashed. Windows boots into recovery mode, and Linux throws XFS errors and does not boot. I ran the test again and it happened just like the first time, but I noticed that only VMs doing disk I/O are affected by this bug. The VMs that were powered off started fine, and even the MD5 of the disk file did not change after the rebalance.
Can anyone else confirm this?
Volume info:

Volume Name: vmware2
Type: Distributed-Replicate
Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
Status: Started
Snapshot Count: 0
Number of Bricks: 22 x 2 = 44
Transport-type: tcp
Bricks:
Brick1: gluster01:/mnt/disk1/vmware2
Brick2: gluster03:/mnt/disk1/vmware2
Brick3: gluster02:/mnt/disk1/vmware2
Brick4: gluster04:/mnt/disk1/vmware2
Brick5: gluster01:/mnt/disk2/vmware2
Brick6: gluster03:/mnt/disk2/vmware2
Brick7: gluster02:/mnt/disk2/vmware2
Brick8: gluster04:/mnt/disk2/vmware2
Brick9: gluster01:/mnt/disk3/vmware2
Brick10: gluster03:/mnt/disk3/vmware2
Brick11: gluster02:/mnt/disk3/vmware2
Brick12: gluster04:/mnt/disk3/vmware2
Brick13: gluster01:/mnt/disk4/vmware2
Brick14: gluster03:/mnt/disk4/vmware2
Brick15: gluster02:/mnt/disk4/vmware2
Brick16: gluster04:/mnt/disk4/vmware2
Brick17: gluster01:/mnt/disk5/vmware2
Brick18: gluster03:/mnt/disk5/vmware2
Brick19: gluster02:/mnt/disk5/vmware2
Brick20: gluster04:/mnt/disk5/vmware2
Brick21: gluster01:/mnt/disk6/vmware2
Brick22: gluster03:/mnt/disk6/vmware2
Brick23: gluster02:/mnt/disk6/vmware2
Brick24: gluster04:/mnt/disk6/vmware2
Brick25: gluster01:/mnt/disk7/vmware2
Brick26: gluster03:/mnt/disk7/vmware2
Brick27: gluster02:/mnt/disk7/vmware2
Brick28: gluster04:/mnt/disk7/vmware2
Brick29: gluster01:/mnt/disk8/vmware2
Brick30: gluster03:/mnt/disk8/vmware2
Brick31: gluster02:/mnt/disk8/vmware2
Brick32: gluster04:/mnt/disk8/vmware2
Brick33: gluster01:/mnt/disk9/vmware2
Brick34: gluster03:/mnt/disk9/vmware2
Brick35: gluster02:/mnt/disk9/vmware2
Brick36: gluster04:/mnt/disk9/vmware2
Brick37: gluster01:/mnt/disk10/vmware2
Brick38: gluster03:/mnt/disk10/vmware2
Brick39: gluster02:/mnt/disk10/vmware2
Brick40: gluster04:/mnt/disk10/vmware2
Brick41: gluster01:/mnt/disk11/vmware2
Brick42: gluster03:/mnt/disk11/vmware2
Brick43: gluster02:/mnt/disk11/vmware2
Brick44: gluster04:/mnt/disk11/vmware2
Options Reconfigured:
cluster.server-quorum-type: server
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
features.shard: on
cluster.data-self-heal-algorithm: full
features.cache-invalidation: on
ganesha.enable: on
features.shard-block-size: 256MB
client.event-threads: 2
server.event-threads: 2
cluster.favorite-child-policy: size
storage.build-pgfid: off
network.ping-timeout: 5
cluster.enable-shared-storage: enable
nfs-ganesha: enable
cluster.server-quorum-ratio: 51%
Adding bricks:
gluster volume add-brick vmware2 replica 2 gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2 gluster02:/mnt/disk11/vmware2 gluster04:/mnt/disk11/vmware2

Starting fix-layout:
gluster volume rebalance vmware2 fix-layout start

Starting rebalance:
gluster volume rebalance vmware2 start
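For what it's worth, the same expansion steps can be run as one script that waits for the rebalance to report completion before any VMs are restarted. This is only a sketch: the gluster commands are the ones above, but the polling loop and the 60-second interval are illustrative assumptions, not part of the original procedure.

```shell
#!/bin/sh
# Sketch: expand vmware2 by one replica-2 brick pair per node, then rebalance.
# Assumes this runs on a node with the gluster CLI and cluster admin access.

gluster volume add-brick vmware2 replica 2 \
    gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2 \
    gluster02:/mnt/disk11/vmware2 gluster04:/mnt/disk11/vmware2

gluster volume rebalance vmware2 fix-layout start

gluster volume rebalance vmware2 start

# Poll until every node reports "completed" before rebooting any VM
# (hypothetical check; interval chosen arbitrarily).
until gluster volume rebalance vmware2 status | grep -q completed; do
    sleep 60
done
```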
--
Respectfully
Mahdi A. Mahdi
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users