Quoting Radhakrishnan2 S (radhakrishnan2.s@xxxxxxx):

> In addition, about putting all kinds of disks: putting all drives in
> one box was done for two reasons,
>
> 1. Avoid CPU choking

This depends only on what kind of hardware you select and how you
configure it. You can (if need be) restrict the number of CPUs the Ceph
daemons get, with cgroups for example (or use containers); a minimal
sketch is below my sig.

> 2. Example: If my cluster has 20 nodes in total, then all 20 nodes
> will have NVMe, SSD and NL-SAS; this way I'll get more capacity and
> performance compared to homogeneous nodes. If I have to break the 20
> nodes into 5 NVMe based, 5 SSD based and the remaining 10 as spindle
> based with NVMe acting as bcache, then I'm restricting the drive
> count and thereby lowering IO density / performance. Please advise
> in detail based on your production deployments.

The drawback of all types of disk in one box is that all pools in your
cluster are affected when one node goes down (a sketch of why is below
my sig). And if your storage needs change in the future, then it does
not make sense to have bought identical boxes. I.e. it's cheaper to buy
dedicated boxes for, say, spinners only, if you end up needing that
(lower CPU requirements, cheaper boxes).

You need to decide if you want max performance or max capacity. More,
smaller nodes mean the overall impact when one node fails is much
smaller. Just check what your budget allows you to buy with
"all-in-one" boxes versus "dedicated" boxes.

Are you planning on dedicated monitor nodes (I would definitely do
that)?

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                    +31 318 648 688 / info@xxxxxx
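A minimal sketch of the cgroup approach mentioned above, assuming a
systemd-based deployment using the stock ceph-osd@.service unit
template; the 400% figure (four cores' worth of CPU) is only an
illustrative value, not a recommendation:

  # Drop-in applying to every OSD instance on this host:
  # /etc/systemd/system/ceph-osd@.service.d/cpu-limit.conf
  [Service]
  # systemd enforces this limit via the kernel's cgroup cpu controller
  CPUQuota=400%

  # Reload unit files and restart the OSDs to apply it:
  systemctl daemon-reload
  systemctl restart ceph-osd.target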
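And a sketch of why a mixed box touches every pool: with device classes
(Luminous and later) you would typically pin each pool to one class via
its CRUSH rule, so a host carrying several classes holds OSDs serving
all of those pools at once. The pool names here are made up for
illustration:

  # One rule per device class, replicating across hosts
  ceph osd crush rule create-replicated nvme-rule default host nvme
  ceph osd crush rule create-replicated hdd-rule  default host hdd

  # Pin (hypothetical) pools to those rules
  ceph osd pool set fast-pool     crush_rule nvme-rule
  ceph osd pool set capacity-pool crush_rule hdd-rule

  # A mixed box has OSDs under both rules, so its failure degrades
  # PGs in fast-pool and capacity-pool at the same time.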