Another advantage of lots of small servers over a few big ones is that you can use erasure coding.
Erasure coding can save you a lot of money, but its performance has to be evaluated
for your use case.
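For reference, the basic plumbing of an EC data pool for RBD looks roughly like this (just a
sketch with made-up pool names and a 4+2 profile, not the exact settings of the clusters
mentioned below; allow_ec_overwrites requires BlueStore):

  # EC profile: 4 data + 2 coding chunks, one chunk per host
  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create rbd-data 128 128 erasure ec-4-2
  # RBD needs partial overwrites on the EC pool
  ceph osd pool set rbd-data allow_ec_overwrites true
  # image metadata still lives in a small replicated pool
  ceph osd pool create rbd-meta 64 64 replicated
  rbd create rbd-meta/vm-disk-1 --size 100G --data-pool rbd-data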
I'm working with several clusters of 8-12 servers with 6-10 SSDs each that run erasure coding
for VMs with RBD. They perform surprisingly well: ~6-10k IOPS at ~30% CPU load and ~30%
disk I/O load.
But that requires at least 7 servers for a reasonable setup, and some thorough benchmarking to
evaluate it for your scenario. The tail latencies in particular can sometimes be prohibitive.
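fio with the rbd engine is a reasonable starting point for that kind of benchmark (again just a
sketch, the pool and image names are placeholders from the example above); the completion
latency percentiles it reports are what to watch for the tail:

  fio --name=ec-rbd-test --ioengine=rbd --clientname=admin \
      --pool=rbd-meta --rbdname=vm-disk-1 \
      --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=300 --time_based --group_reporting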
Paul
2018-06-20 14:09 GMT+02:00 Wido den Hollander <wido@xxxxxxxx>:
On 06/20/2018 02:00 PM, Robert Sander wrote:
> On 20.06.2018 13:58, Nick A wrote:
>
>> We'll probably add another 2 OSD drives per month per node until full
>> (24 SSDs per node), at which point, more nodes.
>
> I would add more nodes earlier to achieve better overall performance.
Exactly. Not only performance, but also failure domain.
In a smaller setup I would always choose 1U nodes with 8-10 SSDs each.
Wido
>
> Regards
>
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com