Hello, I would like to ask you whether this configuration is valid.

I have a 32-node cluster running CentOS 6.6 (kernel 2.6.32-504.8.1.el6.x86_64) with GlusterFS version 3.7.8. InfiniBand is working at 40 Gb/s, and on each node I have configured two bricks: one on a 1 TB disk with an XFS filesystem dedicated to Gluster, the other on a 1 TB disk with an ext4 filesystem, also dedicated to Gluster. I have configured only one big distributed volume made of 64 bricks.

This is the volume configuration in summary:

Volume Name: scratch
Type: Distribute
Volume ID: fc6f18b6-a06c-4fdf-ac08-23e9b4f8053e
Status: Started
Number of Bricks: 64
Transport-type: tcp,rdma
Bricks:
Brick1: ib-wn001:/bricks/brick1/gscratch0
Brick2: ib-wn002:/bricks/brick1/gscratch0
....
Brick31: ib-wn032:/bricks/brick1/gscratch0
Brick32: ib-wn030:/bricks/brick1/gscratch0
Brick33: ib-wn001:/bricks/brick2/gsp2
....
Brick64: ib-wn032:/bricks/brick2/gsp2
Options Reconfigured:
features.inode-quota: off
features.quota: off
features.scrub-freq: daily
features.scrub: Inactive
features.bitrot: off
config.transport: tcp,rdma
performance.readdir-ahead: on
nfs.disable: true

It seems to be working well, but df -h | grep scratch gives this output:

ib-wn001:/scratch.rdma   35T   13T   22T  36% /scratch

However, I expected a volume of 64 TB (64 bricks of 1 TB each). Can you help me understand? The volume is currently in use, but its size differs from what I expected...

Thanks to all,
Fedele Stabile
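P.S. If it helps, here is how I could collect each brick's real capacity and compare it against the volume total; a minimal sketch, assuming all hosts follow the ib-wnNNN naming shown above:

    # Gluster itself reports total and free disk space per brick:
    gluster volume status scratch detail

    # Or check the brick filesystems directly on every node:
    for n in $(seq -w 1 32); do
        ssh "ib-wn0${n}" df -h /bricks/brick1 /bricks/brick2
    done

If some of those filesystems turn out to be smaller than 1 TB, that would account for the total being below 64 TB.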