Heya,

I have a two-brick replicated Gluster volume. Both servers are up and connected right now, but to my surprise they are reporting different Used/Available statistics. I don't seem to be running into any issues on the client end, but I'd like to know the safest way to sync the files so that both bricks end up with the same content, preferably without causing any downtime on my file servers. I tried running "ls -lahR * .*" on one of the clients to force a self-heal, but it didn't get everything copied over. (I've put the exact commands I have in mind for next steps at the end of this message.) Thanks!

Server 1 (the server the clients mount in fstab):

Filesystem      1K-blocks    Used Available Use% Mounted on
/dev/md0        412847280 7590928 384284936   2% /ebs_raid

gluster> peer status
Number of Peers: 1

Hostname: 1-gfs
Uuid: b3fbe1b0-d2f4-4ae8-975a-921643e56894
State: Peer in Cluster (Connected)

gluster> volume info

Volume Name: 1-gfs-edgar
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 1-gfs:/ebs_raid
Brick2: 2-gfs:/ebs_raid
Options Reconfigured:
performance.quick-read: on
performance.cache-max-file-size: 512KB
performance.cache-size: 512MB
performance.stat-prefetch: on
network.frame-timeout: 60

Server 2:

Filesystem      1K-blocks    Used Available Use% Mounted on
/dev/md0        412847280 4080260 387795604   2% /ebs_raid

gluster> peer status
Number of Peers: 1

Hostname: 2-gfs
Uuid: 61915cd1-2735-4c0e-ba2f-829aa6bc4a9a
State: Peer in Cluster (Connected)

gluster> volume info

Volume Name: 1-gfs-edgar
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 1-gfs:/ebs_raid
Brick2: 2-gfs:/ebs_raid
Options Reconfigured:
performance.quick-read: on
performance.cache-max-file-size: 512KB
performance.cache-size: 512MB
performance.stat-prefetch: on
network.frame-timeout: 60
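In case it helps with diagnosis, here is a rough sketch of how I could compare the two bricks directly on the backend (run on each server against the brick directory itself; I'm assuming nothing is writing to /ebs_raid while it runs, and file counts and block totals are only a coarse signal):

find /ebs_raid -type f | wc -l   # number of regular files on this brick
du -s /ebs_raid                  # total blocks used by this brick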
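And this is the full-volume self-heal trigger I was planning to try next, adapted from the Gluster admin guide (here /mnt/edgar is just a placeholder for wherever the clients actually mount the volume):

find /mnt/edgar -print0 | xargs --null stat >/dev/null

My understanding is that stat-ing every entry through the client mount point makes the replicate translator check and heal each file, and that find walks every entry uniformly, including ones my shell-glob "ls -lahR * .*" pass may have missed.

--
Pierre-Luc Brunet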