Thanks!!!
Daniele
2017-02-21 13:30 GMT+01:00 Ravishankar N <ravishankar@xxxxxxxxxx>:
On 02/21/2017 05:17 PM, Nithya Balachandran wrote:
Hi,
Ideally, both bricks in a replica set should be of the same size.
Ravi, can you confirm?
Yes, correct.
-Ravi
Regards,
Nithya
On 21 February 2017 at 16:05, Daniele Antolini <lantuin@xxxxxxxxx> wrote:
Hi Serkan,
thanks a lot for the answer.
So, if you are correct, in a distributed-replicated environment the best practice is to pair nodes of similar size together, since the smaller brick of each pair determines its usable size?
For example:
node1  1 GB
node2 10 GB
node3  4 GB
node4  8 GB
node5 15 GB
node6  7 GB
So:
node1 with node3 (smallest is 1 GB)
node4 with node6 (smallest is 7 GB)
node2 with node5 (smallest is 10 GB)
The command to launch:
gluster volume create gv0 replica 2 node1:/opt/data/gv0 node3:/opt/data/gv0 node4:/opt/data/gv0 node6:/opt/data/gv0 node2:/opt/data/gv0 node5:/opt/data/gv0
Right? In this way I should have 18 GB of free space on the mounted volume (1 GB + 7 GB + 10 GB).
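Once the volume is created, I suppose I could double-check the pairing and the resulting capacity with something like the following (assuming the volume is named gv0 and mounted at /mnt/test):

gluster volume info gv0    # lists the bricks; consecutive bricks form each replica pair
df -h /mnt/test            # shows the usable capacity the client actually sees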
2017-02-21 11:30 GMT+01:00 Serkan Çoban <cobanserkan@xxxxxxxxx>:
I think gluster1 and gluster2 became a replica pair; the smallest size
between them is the effective size (1 GB).
Same for gluster3 and gluster4 (3 GB). Total: 4 GB of space available. This
is just a guess though.
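A rough back-of-the-envelope check of that guess, assuming replica pairs are
formed from consecutive bricks on the create command line (just a sketch, not
gluster's actual logic):

#!/bin/bash
# Brick sizes in GB for gluster1..gluster4, in create-command order.
sizes=(1 2 5 3)
total=0
for ((i = 0; i < ${#sizes[@]}; i += 2)); do
    a=${sizes[i]}
    b=${sizes[i+1]}
    (( pair_min = a < b ? a : b ))   # effective size of this replica pair
    (( total += pair_min ))
done
echo "expected usable capacity: ${total} GB"   # prints 4 GB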
On Tue, Feb 21, 2017 at 1:18 PM, Daniele Antolini <lantuin@xxxxxxxxx> wrote:
> Hi all,
>
> first of all, nice to meet you. I'm new here and I'm subscribing to ask a
> very simple question.
>
> I don't completely understand how heterogeneous bricks are handled in a
> distributed-replicated environment.
>
> I've just done a test with four bricks:
>
> gluster1 1 GB
> gluster2 2 GB
> gluster3 5 GB
> gluster4 3 GB
>
> Each partition is mounted locally at /opt/data
>
> I've created a gluster volume with:
>
> gluster volume create gv0 replica 2 gluster1:/opt/data/gv0
> gluster2:/opt/data/gv0 gluster3:/opt/data/gv0 gluster4:/opt/data/gv0
>
> and then mounted on a client:
>
> testgfs1:/gv0 4,0G 65M 4,0G 2% /mnt/test
>
> I see 4 GB of free space but I cannot understand how this space has been
> allocated.
> Can someone please explain to me how this happened?
>
> Thanks a lot
>
> Daniele
>
>
>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users