Re: Yet another hardware planning question ...

Six SSDs per NVMe journal might leave the journal in contention. Can you provide the specific models you will be using?
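
As a rough back-of-envelope check (the throughput figures below are placeholders I'm assuming, not specs of your actual drives): with filestore, every write hits the journal before the data disk, so one NVMe has to absorb the combined write throughput of the six OSDs behind it.

    # Back-of-envelope journal contention check (placeholder numbers;
    # substitute sustained-write specs or fio results for your drives).
    ssd_write_mb_s = 500       # assumed sustained write per SATA/SAS SSD
    osds_per_journal = 6       # from the proposed layout
    nvme_write_mb_s = 2000     # assumed sustained write of the NVMe journal

    demand = ssd_write_mb_s * osds_per_journal
    print("aggregate OSD write demand: %d MB/s" % demand)
    print("journal headroom: %d MB/s" % (nvme_write_mb_s - demand))
    if demand > nvme_write_mb_s:
        print("journal would be the bottleneck at this ratio")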


On Oct 13, 2016 10:23 AM, "Patrik Martinsson" <patrik.martinsson@xxxxxxxxxxxxx> wrote:
Hello everyone, 

We are in the process of buying hardware for our first Ceph cluster. We
will start with some testing and performance measurements to make sure
we are on the right track, and once we are satisfied with our setup
we'll continue to grow it over time.

Now, I'm just seeking some thoughts on our future hardware. I know
there are a lot of these kinds of questions out there, so please forgive
me for posting another one.

Details,
- Cluster will be in the same datacenter, multiple racks as we grow
- Typical workload (this is incredibly vague, forgive me again) would
be an OpenStack environment hosting 150-200 VMs; we'll have quite a
few databases for Jira/Confluence/etc., some workload coming from
Stash/Bamboo agents, a Puppet master/Foreman, and other typical "core
infra stuff".

Given these prerequisites, going all SSDs (with NVMe for journals) may
seem like overkill(?), but we feel we can afford it and that it will
benefit us in the future.

Planned hardware, 

Six nodes to begin with, which would give us a cluster size of ~46 TB
with a default replica count of three (although probably a bit more,
since the VMs would be backed by an erasure-coded pool). Each node will
look something like this (a rough capacity sketch follows the list),
 - 1x  Intel E5-2695 v4 2.1GHz, 45M Cache, 18 Cores
 - 2x  Dell 64 GB RDIMM 2400MT
 - 12x Dell 1.92TB Mixed Use MLC 12Gbps (OS on separate disks)
 - 2x  Dell 1.6TB NVMe Mixed Use (6 OSDs per NVMe)
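
For reference, here is the rough arithmetic behind the ~46 TB figure (a sketch only; it assumes all twelve 1.92 TB SSDs per node are OSD data disks, as the 6-OSDs-per-NVMe split suggests):

    # Rough usable-capacity estimate for the planned cluster.
    nodes = 6
    osds_per_node = 12     # assuming all twelve 1.92 TB SSDs are OSD data disks
    disk_tb = 1.92
    replica = 3

    raw_tb = nodes * osds_per_node * disk_tb    # ~138 TB raw
    usable_tb = raw_tb / replica                # ~46 TB at 3x replication
    print("raw: %.1f TB, usable at %dx replication: %.1f TB"
          % (raw_tb, replica, usable_tb))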

Network between all nodes within a rack will be 40Gbit (and 200Gbit
between racks), backed by Juniper QFX5200-32C switches.

Rather than asking,
- "Does this seem reasonable for our workload?",

I want to ask,
- "Is there any reason *not* to have a setup like this? Are there any
obvious bottlenecks or flaws that we are missing, or could this very
well work as a good start (with the ability to grow by adding more
servers)?"

When it comes to workload-specific issues, I think we'll just have to
see and grow as we learn.

We'll be grateful for any input, thoughts, ideas, suggestions, you name
it.

Best regards, 
Patrik Martinsson,
Sweden
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
