Hi Ravishankar - I figured out the issue. The 4th node was showing as online under 'gluster peer status' and 'gluster volume status', but 'gluster volume status' wasn't listing a TCP port for that 4th node's brick. After I opened port 49152 in firewalld and re-copied the ISO, the hash no longer changed.
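For the archives, the fix was roughly the following (a sketch from memory, run on the affected node; the port number came from what the healthy bricks were using, and the zone may differ in other setups):

# open the brick port that the healthy bricks were listening on
sudo firewall-cmd --permanent --add-port=49152/tcp
sudo firewall-cmd --reload
# verify that every brick now shows a TCP port
sudo gluster volume status swarm-vols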
So now the question would be: why would one malfunctioning node override three functioning nodes and cause the file to be altered? I wasn't even performing the initial copy onto the malfunctioning node.
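If anyone wants to reproduce the check, hashing the file directly on each brick (bypassing the FUSE mount) shows which replica diverged - something like this, where /gluster/data/file.iso is a placeholder for the actual path on the brick:

# compare the on-brick copies across all four nodes
# (/gluster/data/file.iso is a stand-in path)
for host in docker1 docker2 docker3 docker4; do
    ssh "$host" sudo sha256sum /gluster/data/file.iso
done
# and check whether gluster itself thinks anything needs healing
sudo gluster volume heal swarm-vols info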
matt@docker1:~$ sudo glusterfs --version
glusterfs 6.3
matt@docker1:~$ sudo gluster volume info
Volume Name: swarm-vols
Type: Replicate
Volume ID: 0b51e6b3-786e-454e-8a16-89b47e94828a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: docker1:/gluster/data
Brick2: docker2:/gluster/data
Brick3: docker3:/gluster/data
Brick4: docker4:/gluster/data
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
auth.allow: 10.5.22.*

From: Ravishankar N <ravishankar@xxxxxxxxxx>
Sent: Saturday, July 27, 2019 2:04 AM
To: Matthew Evans <runmatt@xxxxxxxx>; gluster-users@xxxxxxxxxxx <gluster-users@xxxxxxxxxxx>
Subject: Re: GlusterFS Changing Hash of Large Files?
On 26/07/19 6:50 PM, Matthew Evans wrote:
Can you provide the below details?
- glusterfs version
- `gluster volume info`