Hi,

For hardware, Inktank has good guides here:
http://www.inktank.com/resource/inktank-hardware-selection-guide/
http://www.inktank.com/resource/inktank-hardware-configuration-guide/

Ceph works well with many OSD daemons (one OSD per disk), so you should not use RAID. (XFS is the recommended filesystem for OSD daemons.)

You don't need spare disks either, just enough free capacity to absorb a disk failure: data is replicated and rebalanced onto the remaining disks/OSDs when a disk fails. A minimal sketch of such a per-disk OSD setup is at the end of this mail, below the quoted message.

----- Original Message -----
From: "Adrian Sevcenco" <Adrian.Sevcenco@xxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, 4 March 2015 18:30:31
Subject: CEPH hardware recommendations and cluster design questions

Hi! I have seen the documentation at
http://ceph.com/docs/master/start/hardware-recommendations/
but those minimum requirements, without any recommendations, don't tell me much ...

So, from what I have seen, for MON and MDS any cheap 6-core, 16+ GB RAM AMD machine would do ... What puzzles me is the "per daemon" construct ... Why would I need to run multiple daemons? With separate servers (3 MON + 1 MDS - I understood that this is the requirement) I imagine each will run a single type of daemon. Did I miss something? (Or is there perhaps a relation between daemons and block devices, so that each block device should have its own daemon?)

For MON and MDS: would it help the clients if these were on 10 GbE?

For OSD: I plan to use a 36-disk server as the OSD server (ZFS RAIDZ3 over all disks + 2 SSDs mirrored for ZIL and L2ARC) - that would give me ~132 TB. How much RAM would I really need? (128 GB would be way too much, I think.) (That RAIDZ3 over 36 disks is just one thought - I also have options like 2 x 18-disk RAIDZ2, or 34-disk RAIDZ3 + 2 hot spares.)

Regarding the journal and scrubbing: with ZFS I would think I can safely skip the Ceph ones ... is this OK?

Do you have any other advice or recommendations for me? (The read:write ratio will be 10:1.)

Thank you!!
Adrian
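
As promised above, here is a minimal ceph.conf sketch of the one-OSD-per-disk, no-RAID layout. The fsid, host names and addresses are placeholders, and the values are illustrative defaults rather than tuned recommendations:

  [global]
  fsid = <your-cluster-uuid>                 # placeholder
  mon initial members = mon1, mon2, mon3     # hypothetical monitor hosts
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3    # hypothetical addresses
  # Replication replaces RAID and hot spares: each object is stored
  # on 3 different OSDs, and writes are acknowledged once at least
  # 2 copies exist.
  osd pool default size = 3
  osd pool default min size = 2

  [osd]
  # One OSD daemon per physical disk, formatted as XFS:
  osd mkfs type = xfs
  osd mount options xfs = rw,noatime,inode64
  # Per-OSD journal size in MB (on the same disk or an SSD partition):
  osd journal size = 5120

With this layout, when a disk dies its OSD is marked down/out and Ceph re-creates the missing replicas on the remaining OSDs automatically - which is why no hot spares are needed, only enough free capacity to re-replicate one disk's worth of data.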
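
And a hypothetical deployment of three of a node's disks as three independent OSDs, assuming hammer-era ceph-deploy syntax (osd-node1 and sdb/sdc/sdd are made-up names):

  # One OSD per raw disk; ceph-deploy partitions the disk,
  # creates the XFS filesystem and registers the OSD:
  ceph-deploy osd create osd-node1:sdb
  ceph-deploy osd create osd-node1:sdc
  ceph-deploy osd create osd-node1:sdd

  # Verify that each disk shows up as its own OSD and that
  # the cluster is healthy:
  ceph osd tree
  ceph -s

A 36-disk box would then simply run 36 such OSD daemons; the usual rule of thumb is roughly 1 GB of RAM and one CPU core per OSD daemon, with extra headroom for recovery.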