Hi all,

I am looking into Ceph and CephFS and, in my head, I am comparing them with Gluster. The way I have been running Gluster over the years is either replicated or replicated-distributed clusters. The small setup we have had is a replicated cluster with one arbiter and two fileservers. Each fileserver is configured with RAID6, and that RAID array is used as the brick. If disaster strikes and one fileserver burns up, there is still the other fileserver, and as it is RAIDed I can lose two disks on that machine before I start to lose data.

.... thinking about a similar setup with Ceph ....

The idea is to have one "admin" node and two fileservers. The admin node will run mon, mgr and mds. The storage nodes will run mon, mgr, mds and 8x OSD (8 disks), with replication = 2.

The problem is that I cannot get my head around how to think when disaster strikes. Say one fileserver burns up: there is still the other fileserver, and from my understanding Ceph will start to re-replicate the data on that same fileserver; once this is done, disks can be lost on this server without losing data. But to have this safety at the hardware level, it means the Ceph cluster can never be more than 50% full, or this will not work, right? ... and it becomes similar if we have three fileservers; then the cluster can never be more than 2/3 full?

I am not sure whether I misunderstand how Ceph works, or whether Ceph simply works badly on smaller systems like this? I would appreciate it if somebody with better knowledge could help me out with this!

Many thanks in advance!!

Marcus

---
E-mailing SLU will result in SLU processing your personal data.
For more information on how this is done, click here <https://www.slu.se/en/about-slu/contact-slu/personal-data/>

________

Community Meeting Calendar:
Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
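
[Editor's note: the capacity reasoning in the question above (50% with two fileservers, 2/3 with three) can be sketched as a quick back-of-the-envelope calculation. This is my own illustration, not anything from the thread: the function name is made up, and it assumes equal-sized nodes and that CRUSH is actually allowed to place the surviving replicas on the remaining host(s) — i.e. a failure domain of `osd` rather than the usual `host` — which is itself part of what the poster is asking about.]

```python
def max_fill_after_node_loss(n_nodes: int, nodes_lost: int = 1) -> float:
    """Largest fraction of raw cluster capacity that can be in use
    if the cluster must still be able to re-replicate all data onto
    the surviving nodes after `nodes_lost` equal-sized nodes fail.

    After the failure, (n_nodes - nodes_lost) / n_nodes of the raw
    capacity survives, so the cluster must not be fuller than that
    before the failure for self-healing to have room to complete.
    """
    surviving = n_nodes - nodes_lost
    if surviving <= 0:
        raise ValueError("no surviving nodes left to hold the data")
    return surviving / n_nodes

# Two fileservers, one lost: stay below 1/2 full.
print(max_fill_after_node_loss(2))   # 0.5
# Three fileservers, one lost: stay below 2/3 full, matching the mail.
print(max_fill_after_node_loss(3))   # two thirds
```

This reproduces the 1/2 and 2/3 figures from the question; whether Ceph will in fact heal onto the same host depends on the CRUSH rule's failure domain, which the thread leaves open.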