Hi, I just started using Gluster today to build a new fileserver, and so far I'm impressed with the ease of set-up and configuration.
However, while the cluster appears healthy, no replication is actually happening: I change files on one node, and nothing shows up on the other.
Since I'm new to this, I don't know what troubleshooting steps to take yet, but I've run a few commands in the gluster CLI:
```
root@nfs1:/nfs# gluster volume profile nfspool info
Brick: nfs1:/nfs
----------------
Cumulative Stats:
   %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
   ---------   -----------   -----------   -----------   ------------        ----
        7.32      30.00 us      30.00 us      30.00 us              1     GETXATTR
       18.54      38.00 us      25.00 us      51.00 us              2       STATFS
       74.15      76.00 us      61.00 us      94.00 us              4       LOOKUP

    Duration: 7663 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

Interval 0 Stats:
   %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
   ---------   -----------   -----------   -----------   ------------        ----
        7.32      30.00 us      30.00 us      30.00 us              1     GETXATTR
       18.54      38.00 us      25.00 us      51.00 us              2       STATFS
       74.15      76.00 us      61.00 us      94.00 us              4       LOOKUP

    Duration: 7663 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

Brick: nfs2:/nfs
----------------
Cumulative Stats:
   %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
   ---------   -----------   -----------   -----------   ------------        ----
        7.11      31.00 us      31.00 us      31.00 us              1     GETXATTR
       20.64      45.00 us      42.00 us      48.00 us              2       STATFS
       72.25      78.75 us      64.00 us      91.00 us              4       LOOKUP

    Duration: 7661 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

Interval 0 Stats:
   %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
   ---------   -----------   -----------   -----------   ------------        ----
        7.11      31.00 us      31.00 us      31.00 us              1     GETXATTR
       20.64      45.00 us      42.00 us      48.00 us              2       STATFS
       72.25      78.75 us      64.00 us      91.00 us              4       LOOKUP

    Duration: 7661 seconds
   Data Read: 0 bytes
Data Written: 0 bytes

root@nfs1:/nfs# gluster volume status
Status of volume: nfspool
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick nfs1:/nfs                                         49152   Y       2318
Brick nfs2:/nfs                                         49152   Y       1833
NFS Server on localhost                                 2049    Y       2882
Self-heal Daemon on localhost                           N/A     Y       2335
NFS Server on nfs2                                      2049    Y       1985
Self-heal Daemon on nfs2                                N/A     Y       1953

There are no active volume tasks
```
```
gluster> peer probe nfs2
peer probe: success: host nfs2 port 24007 already in peer list
gluster> peer status
Number of Peers: 1

Hostname: nfs2
Uuid: ab13df7b-d7e7-46c9-8c43-c347b68a2a08
State: Peer in Cluster (Connected)
gluster> peer status nfs2
Usage: peer status
gluster> peer status
Number of Peers: 1

Hostname: nfs2
Uuid: ab13df7b-d7e7-46c9-8c43-c347b68a2a08
State: Peer in Cluster (Connected)
gluster> volume list
nfspool
gluster> volume sync
Usage: volume sync <HOSTNAME> [all|<VOLNAME>]
gluster> volume sync nfs2 all
Sync volume may make data inaccessible while the sync is in progress. Do you want to continue? (y/n) y
volume sync: success
```
To me, this all looks normal: both bricks are online, I can probe each peer from the other, and pings work as well. That suggests there shouldn't be any split-brain condition either. Yet:
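One thing I haven't tried yet is asking the self-heal daemon directly; from what I've read in the docs, something like this should list entries pending heal or in split-brain (my guess, since I'm still new to the tooling):

```shell
# List files the self-heal daemon considers pending heal
# ("nfspool" is the volume name from the output above)
gluster volume heal nfspool info

# List entries Gluster has actually flagged as split-brain
gluster volume heal nfspool info split-brain
```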
```
root@nfs1:/nfs# ls
file1   file12  file15  file18  file20  file23  file3  file6  file9
file10  file13  file16  file19  file21  file24  file4  file7  lost+found
file11  file14  file17  file2   file22  file25  file5  file8
```
and on the other node:
```
root@nfs2:/nfs# ls
foo  lost+found
```
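One thing I'm now wondering about: `/nfs` is the brick path shown in `volume status`, and I've been reading and writing it directly. If Gluster expects all I/O to go through a client mount of the volume rather than the brick directory, then maybe what I'm missing is something like this on each node (an assumption on my part; the mount point is made up):

```shell
# Mount the volume through the GlusterFS FUSE client and do all
# reads/writes there, instead of touching the brick directory
# (/mnt/nfspool is a hypothetical mount point)
mkdir -p /mnt/nfspool
mount -t glusterfs nfs1:/nfspool /mnt/nfspool
```

Is that the piece I'm missing, or is something else wrong with the replication?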