Hello,

What is your expected workload? VMs, primary storage, backup, object storage, ...?
How many disks do you plan to put in each OSD node?
How many CPU cores? How much RAM per node?
Ceph access protocol(s): CephFS, RBD, or object storage?
How do you plan to give your clients access to the storage? NFS, SMB, CephFS, ...?
Replicated pools or EC pools? If EC, which k and m factors? (There is a small CLI sketch below the quoted message.)
What OS (for Ceph nodes and clients)?

Recommendations:
- For your information, Bluestore is not like Filestore: there is no need for a journal SSD. With Bluestore it is recommended to keep WAL/RocksDB and data on the same disk (see the ceph-volume sketch below the quoted message).
- For production, it is recommended to have dedicated MON/MGR nodes.
- You may also need dedicated MDS nodes, depending on the Ceph access protocol(s) you choose.
- If you need commercial support afterwards, you should talk to a Red Hat representative.

The Samsung 850 Pro is a consumer-grade SSD, not great for Ceph.

> On 18 Jul 2018, at 19:16, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>
> I have decided to set up a 5-node Ceph storage cluster and the following is my
> inventory; just tell me whether it is good to start a first cluster for an
> average load.
>
> 0. Ceph Bluestore
> 1. Journal SSD (Intel DC 3700)
> 2. OSD disk Samsung 850 Pro 500GB
> 3. OSD disk SATA 500GB (7.5k RPM)
> 4. 2x10G NIC (separate public/cluster with jumbo frames)
>
> Do you think this combination is good for an average load?
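
As a minimal sketch of the co-located WAL/RocksDB recommendation above: with ceph-volume, simply not passing a separate --block.db / --block.wal device keeps everything on the data disk. The device name /dev/sdb is only a placeholder for one of your OSD disks.

    # Bluestore OSD; WAL and RocksDB stay on the same device as the data
    # because no --block.db / --block.wal device is given.
    ceph-volume lvm create --bluestore --data /dev/sdb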
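
And a rough sketch for the EC question, in case you go that way. The profile name, the k=3/m=2 values and the PG count are only placeholders: with 5 OSD nodes and a host failure domain, k+m cannot exceed 5, so adapt them to your node count and failure domains.

    # Erasure-code profile with 3 data chunks and 2 coding chunks,
    # spreading chunks across hosts.
    ceph osd erasure-code-profile set ec-3-2 k=3 m=2 crush-failure-domain=host
    # Pool using that profile; 128 PGs is a placeholder, size it for your cluster.
    ceph osd pool create ecpool 128 128 erasure ec-3-2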