Re: Yet another hardware planning question ...

On Thu, 2016-10-13 at 10:29 -0500, Brady Deetz wrote:
> 6 SSDs per NVMe journal might leave your journal in contention. Can
> you provide the specific models you will be using?

Well, according to Dell, the card is called "Dell 1.6TB, NVMe, Mixed
Use Express Flash, PM1725", and the specs for the card are listed here:
http://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dell-PowerEdge-Express-Flash-NVMe-Mixed-Use-PCIe-SSD.pdf

Forgive my poor English here, but when you say "leave your
journal in contention", what exactly do you mean by that?
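
My naive reading is that the concern is whether the combined write rate
of the six SSDs behind one journal exceeds what the single NVMe card can
absorb. A rough back-of-the-envelope check might look like the sketch
below (the per-device numbers are placeholders, not figures from the
PM1725 data sheet):

# Back-of-the-envelope check: does one NVMe journal keep up with six
# SSD-backed OSDs? All numbers are placeholders -- substitute the
# sustained write figures from the PM1725 and SSD data sheets.

nvme_journal_write_mbps = 2000    # assumed sustained write of the NVMe card
ssd_write_mbps          = 400     # assumed sustained write per OSD SSD
osds_per_journal        = 6

# With filestore, every client write hits the journal first, so one
# journal device has to absorb the combined write rate of its OSDs.
aggregate_osd_write = ssd_write_mbps * osds_per_journal

print(f"OSDs can sink   : {aggregate_osd_write} MB/s")
print(f"journal can take: {nvme_journal_write_mbps} MB/s")

if aggregate_osd_write > nvme_journal_write_mbps:
    print("-> the journal would be the bottleneck (contention)")
else:
    print("-> the journal has headroom")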

Best regards, 
Patrik Martinsson
Sweden


> On Oct 13, 2016 10:23 AM, "Patrik Martinsson"
> <patrik.martinsson@xxxxxxxxxxxxx> wrote:
> > Hello everyone, 
> > 
> > We are in the process of buying hardware for our first ceph-cluster.
> > We will start with some testing and do some performance measurements
> > to make sure we are on the right track, and once we are satisfied
> > with our setup we'll continue to grow it over time.
> > 
> > Now, I'm just seeking some thoughts on our future hardware. I know
> > there are a lot of these kinds of questions out there, so please
> > forgive me for posting another one.
> > 
> > Details:
> > - The cluster will be in the same datacenter, spread over multiple
> > racks as we grow.
> > - Typical workload (this is incredibly vague, forgive me again) would
> > be an OpenStack environment hosting 150~200 VMs. We'll have quite a
> > few databases for Jira/Confluence/etc., some workload coming from
> > Stash/Bamboo agents, puppet master/foreman, and other typical "core
> > infra stuff".
> > 
> > Given these prerequisites, going all SSDs (and NVMe for journals)
> > may seem like overkill(?), but we feel we can afford it and that it
> > will benefit us in the future.
> > 
> > Planned hardware:
> > 
> > Six nodes to begin with (which would give us a cluster size of ~46TB
> > with a default replica of three, although probably a bit bigger since
> > the VMs would be backed by an erasure-coded pool) will look something
> > like:
> >  - 1x  Intel E5-2695 v4 2.1GHz, 45M Cache, 18 Cores
> >  - 2x  Dell 64 GB RDIMM 2400MT
> >  - 12x Dell 1.92TB Mixed Use MLC 12Gbps (separate OS disks)
> >  - 2x  Dell 1.6TB NVMe Mixed Use (6 OSDs per NVMe)
> > 
> > Network between all nodes within a rack will be 40Gbit (and 200Gbit
> > between racks), backed by Juniper QFX5200-32C switches.
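
For what it's worth, the ~46TB figure above is just the raw capacity
divided by the replication factor; a quick sketch of the math (the
erasure-code profile is only an example, nothing we have decided on):

# Usable-capacity sketch for the planned cluster: 6 nodes x 12 OSDs x
# 1.92 TB. The erasure-code profile (k=4, m=2) is only an illustration.

nodes, osds_per_node, tb_per_osd = 6, 12, 1.92
raw_tb = nodes * osds_per_node * tb_per_osd        # 138.24 TB raw

replica = 3
usable_replicated = raw_tb / replica               # ~46 TB, the figure above

k, m = 4, 2                                        # example EC profile
usable_ec = raw_tb * k / (k + m)                   # ~92 TB on the same hardware

print(f"raw capacity   : {raw_tb:6.1f} TB")
print(f"3x replication : {usable_replicated:6.1f} TB")
print(f"EC {k}+{m}         : {usable_ec:6.1f} TB")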
> > 
> > Rather than asking the question,
> > - "Does this seem reasonable for our workload?",
> > 
> > I want to ask,
> > - "Is there any reason *not* to have a setup like this? Are there any
> > obvious bottlenecks or flaws that we are missing, or could this very
> > well work as a good start (with the ability to grow by adding more
> > servers)?"
> > 
> > When it comes to workload-related issues, I think we'll just have
> > to see and grow as we learn.
> > 
> > We'd be grateful for any input, thoughts, ideas, suggestions, you
> > name it.
> > 
> > Best regards, 
> > Patrik Martinsson,
> > Sweden
-- 
Kindly regards,
Patrik Martinsson
0707 - 27 64 96
System Administrator Linux
Genuine Happiness
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



