Re: Hardware feedback before purchasing for a PoC

Hi Ignacio,

On 9/3/20 at 15:19, Ignacio Ocampo wrote:
I was considering 1 GB per TB, but I will switch that to 4 GB per TB to account for Bluestore.
Much better. :)
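As a rough sanity check, here is a back-of-the-envelope sketch in Python (the 4 GB-per-TB figure is just the rule of thumb from this thread, and the 4 GB OS overhead is my own guess, not an official formula):

# Rough per-node RAM estimate for Bluestore OSD nodes, using the
# 4 GB per TB rule of thumb discussed in this thread (assumption).
def node_ram_gb(raw_tb_per_node, gb_per_tb=4, os_overhead_gb=4):
    # os_overhead_gb is a guess for the OS and other daemons.
    return raw_tb_per_node * gb_per_tb + os_overhead_gb

for tb in (4, 8, 16):  # the per-node growth steps mentioned in this thread
    print(f"{tb} TB raw per node -> ~{node_ram_gb(tb)} GB RAM")
# 4 TB -> ~20 GB, 8 TB -> ~36 GB, 16 TB -> ~68 GB

By these numbers, the 16 TB-per-node step would already exceed the 32 GB maximum of the proposed motherboard.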

Regarding the number of devices, will 2 disks per node help with the cluster speed?
Yes, it will help. Generally speaking, more disks = better performance. You may consider using the server's M.2 OS disk slot for Bluestore's DB to improve performance; if you do so, make sure to use enterprise SSDs (like the Intel D3-S4510, for example). You may need to get a better AM4 CPU too, especially if using an SSD for Bluestore:

https://cpu.userbenchmark.com/Compare/AMD-Ryzen-5-2600-vs-AMD-A8-9600-APU-2016-DBR/3955vsm339630
https://cpu.userbenchmark.com/Compare/AMD-Ryzen-5-3600-vs-AMD-A8-9600-APU-2016-DBR/4040vsm339630
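If you do put Bluestore's DB on an M.2 SSD, here is a rough sizing sketch in Python. The ~4% figure is based on my recollection of the Bluestore docs' suggestion that block.db shouldn't be smaller than about 4% of the data device, and the 4 x 4 TB layout is the one from this thread:

# Rough block.db partition sizing for DB-on-SSD (sketch only).
# Assumption: block.db should not be smaller than ~4% of the data
# device, if I recall the Bluestore configuration docs correctly.
def db_partition_gb(osd_size_tb, db_fraction=0.04):
    return osd_size_tb * 1000 * db_fraction  # TB -> GB (decimal units)

osds_per_node = 4
per_osd_db_gb = db_partition_gb(4)             # ~160 GB per 4 TB OSD
total_ssd_gb = osds_per_node * per_osd_db_gb   # ~640 GB of M.2 needed
print(per_osd_db_gb, total_ssd_gb)

So with four 4 TB OSDs per node you'd want a fairly large M.2 SSD if you follow that guideline.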


My ultimate goal is to provide support to a second project which will be an OpenStack deployment.
It will be a small deployment, right? :-) Because the config you referenced is quite underpowered.

How many VMs? What level of IO? (IO per second?)

You'll be able to perform about 100-150 IOPS with one HDD per server...
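Very roughly, in Python (the per-HDD IOPS figure and the size=3 write amplification are ballpark assumptions; real numbers depend on workload, caching and WAL placement):

# Ballpark client IOPS for a small replicated Ceph cluster on HDDs.
# Assumptions: ~120 IOPS per 7.2k HDD, size=3 replication, and a
# write-heavy workload where each client write costs 3 backend writes.
def client_iops(num_hdds, iops_per_hdd=120, replicas=3, write_fraction=1.0):
    backend_iops = num_hdds * iops_per_hdd
    cost_per_client_op = write_fraction * replicas + (1 - write_fraction)
    return backend_iops / cost_per_client_op

print(client_iops(3))   # 1 HDD per node, 3 nodes: ~120 client IOPS
print(client_iops(6))   # 2 HDDs per node: ~240 client IOPS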

Cheers
Eneko




Oliver Audrey, I prefer to invest in hardware rather than renting it, since I want to create a private cloud.

Thanks!

*Ignacio Ocampo*

On Mar 9, 2020, at 4:12 AM, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:

Hi Ignacio,

On 9/3/20 at 3:00, Ignacio Ocampo wrote:
Hi team, I'm planning to invest in hardware for a PoC and I would like your
feedback before the purchase:

The goal is to deploy a *16TB* storage cluster with *3 replicas*, and thus *3
nodes*.

System configuration: https://pcpartpicker.com/list/cfDpDx ($400 USD per
node)

Some notes about the configuration:

   - 4-core processor for 4 OSD daemons
   - 8GB RAM for the first 4TB of storage, increasing to 16GB of RAM at
   16TB of storage
   - Motherboard:
      - 4 x SATA 6 Gb/s (one per OSD disk)
      - 2 x PCI-E x1 slots (1 will be used for an additional Gigabit
      Ethernet)
      - 1 x M.2 slot for the host OS
      - RAM can increase up to 32 GB, and another SATA 6 Gb/s controller
      can be added on PCI-E x1 for growth up to *32TB*

As noted, the plan is to deploy nodes with *4TB* and gradually add *12TB* as
needed; memory should also be increased to *16GB* after the *8TB* threshold.
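For reference, here's the raw vs. usable math I'm working from (a rough Python sketch, assuming size=3 replication across the 3 nodes and ignoring the usual advice to keep some headroom free):

# Raw vs. usable capacity for a 3-node, size=3 replicated cluster.
def usable_tb(raw_tb_per_node, nodes=3, replicas=3):
    return raw_tb_per_node * nodes / replicas

for raw_tb in (4, 8, 12, 16):   # planned per-node growth steps
    print(f"{raw_tb} TB/node raw -> {usable_tb(raw_tb):.0f} TB usable")
# With 3 replicas on 3 nodes, usable capacity equals the per-node raw capacity.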
Questions to validate before the purchase

1. Do the hardware components make sense for the *16TB* growth projection?

2. Is it easy to gradually add more capacity to each node (*4TB* each time
per node)?
Thanks for your support!

You may find that having only one disk per node will make the storage quite slow.

I think you're low on RAM to use Bluestore, especially when adding new disks; please check:
https://docs.ceph.com/docs/master/start/hardware-recommendations/

It is easy to add additional disks, yes.

You may also want to try a solution that makes deployment easier for such a small cluster; I can recommend Proxmox VE (https://www.proxmox.com), but there are others.

Cheers and good luck
Eneko



--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



