Hi JF,
Yes, theoretically you are right. Here at INRA, we are in a specific unit with constant growth, but the budget is allocated annually.

> Hello Pierre, I see your point and I understand your arguments. But as it
> happens, the way you create your volumes has an impact on what you can do
> afterward, and how you can expand them. *It has nothing to do with GlusterFS
> itself, nor the developers or the community; it's all in your architectural
> choices at the beginning.*

So I have built three clusters dedicated to different types of computation, and when the new platform needs storage I will have to either build a new cluster or buy a complete set of four or seven nodes, depending on my budget. Our unit does not fit a data-center profile for computation; the evolution is a continuous process.

Yes, that's the reason we have a scratch zone dedicated to computation and intermediate files. It is not secured; all the users know it is like the hard drive in their PC. The other volumes are secured with RAID 5 and replicas.

> It would be the same problem with many other distributed filesystems, or even
> some RAID setups. That is the kind of thing that you have to think about in
> the very beginning, and that will have consequences later.

I think the choices are good for three years, the time before we hit our first disk problems.

> OK, that's better. As for your 14-node cluster, if it's again a
> distributed+striped volume, then it's a 2 × 7 volume (distributed over 2
> brick groups, each being a 7-disk stripe). To extend it, you will have to add
> 7 more bricks to create a new striped brick group, and your volume will be
> transformed into a 3 × 7 one.

I will.

> I would advise you to read carefully the master documentation pertaining to
> volume creation and architecture. Maybe it will help you understand better
> the way things work and the impact of your choices:
> https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md

Many thanks to you and all.

Sincerely.
--
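The layout arithmetic behind the 2 × 7 volume discussed above can be sketched as follows. This is only an illustration of the counting rule (bricks must be added in whole multiples of the stripe width); the function here is hypothetical and not part of any GlusterFS tooling.

```python
# Sketch of the distributed+striped layout math described in the thread.
# A "2 x 7" volume distributes files over 2 brick groups,
# each group being a 7-brick stripe.

def volume_layout(total_bricks: int, stripe: int) -> tuple[int, int]:
    """Return (distribute_count, stripe_count) for a distributed+striped
    volume. Bricks can only be added in whole multiples of the stripe width."""
    if total_bricks % stripe != 0:
        raise ValueError("bricks must be added in multiples of the stripe width")
    return total_bricks // stripe, stripe

# The 14-node cluster: 14 bricks with a stripe of 7 gives a 2 x 7 volume.
print(volume_layout(14, 7))      # (2, 7)

# Extending it means adding 7 more bricks (one full stripe group),
# which turns it into a 3 x 7 volume.
print(volume_layout(14 + 7, 7))  # (3, 7)
```

On the CLI this extension would be done with `gluster volume add-brick`, passing all seven new bricks at once, typically followed by a rebalance so existing data spreads onto the new brick group.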
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users