Thank you very much for your answer David, just what I was after! Just a few additional questions to make things clear to me.

The MDSs do not need to be an odd number? They can be set up as 1, 2, 3, 4 and so on, as needed?

You made the basics clear to me, so when I set up my first CephFS I need as a start: 3 MONs, 2 MDSs and 3 OSDs (to avoid a single point of failure). Is there a clear ratio/relation/approximation between OSDs and MDSs? If I have, say, 100 TB of disk for the OSDs, do I need X GB of disk for the MDSs?

About Gluster: my machines are set up in a Gluster cluster today, but the reason for considering CephFS for these machines instead is that I have replication problems that I have not been able to solve. Secondly, we get indications from our organisation that data use will expand very quickly, and that is where I see that CephFS will suit us: easy to expand as needed.

Thanks to your description of Gluster I will be able to reconfigure my Gluster cluster and rsync to the mounted volume. I have been using rsync directly to the hard drive, and it is now obvious that this does not work (it worked fine on a single distributed server, but not as a replica). I just hadn't got this tip from anybody else.

Thanks again! We will start using CephFS, because it goes hand in hand with our future needs.

Best regards
Marcus

On 04/05/17 06:30, David Turner wrote:
--
Marcus Pedersén
System administrator
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com