Hi,
Yes, this has been reported before by Lindsay Mathieson and Kevin Lemonnier on this list.

In your case, are you doing add-brick and changing the replica count (say from 2 -> 3), or are you adding "replica-count" number of bricks every time?
-Krutika
On Sat, Nov 12, 2016 at 6:40 AM, ML Wong <wongmlb@xxxxxxxxx> wrote:
Has anyone encountered this behavior?

Running 3.7.16 from centos-gluster37, on CentOS 7.2 with NFS-Ganesha 2.3.0. VMs are running fine without problems and with sharding on. However, when I do either an "add-brick" or a "remove-brick start force", the VM files become corrupted and the VMs will not boot anymore.

So far, when I access files through regular NFS, all regular files and directories seem to be accessible fine. I am not sure if this is somehow related to bug 1318136, but any help will be appreciated. Or am I missing any settings? Below is the vol info of the gluster volume.

Volume Name: nfsvol1
Type: Distributed-Replicate
Volume ID: 06786467-4c8a-48ad-8b1f-346aa8342283
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: stor4:/data/brick1/nfsvol1
Brick2: stor5:/data/brick1/nfsvol1
Brick3: stor1:/data/brick1/nfsvol1
Brick4: stor2:/data/brick1/nfsvol1
Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
ganesha.enable: on
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable

thanks,
Melvin
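P.S. In case it helps, the remove-brick flow I attempted was roughly the following (the bricks shown are only an example, not necessarily the ones I actually removed, and I ran the start step with force as mentioned above):

    gluster volume remove-brick nfsvol1 stor1:/data/brick1/nfsvol1 stor2:/data/brick1/nfsvol1 start
    gluster volume remove-brick nfsvol1 stor1:/data/brick1/nfsvol1 stor2:/data/brick1/nfsvol1 status
    gluster volume remove-brick nfsvol1 stor1:/data/brick1/nfsvol1 stor2:/data/brick1/nfsvol1 commit

The corruption already shows up after the start step, before any commit.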
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users