Re: POC Hardware questions


 



On 2/16/21 9:01 AM, Oliver Weinmann wrote:
Dear All,



A question that has probably been asked by many other users before. I want to do a POC. For the POC I can use old decommissioned hardware. Currently I have 3 x IBM X3550 M5 with:


1 Dualport 10G NIC
Intel(R) Xeon(R) CPU E5-2637 v3 @ 3.50GHz
64GB RAM
the other two have a slower CPU but more RAM:

Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
512GB RAM



Of course I can re-arrange the RAM.



The switches are not LACP capable, so I'm planning to use bonding in active-active. For the disks I'm planning on buying 12 x Samsung PM883 1.9TB and use them in an EC pool.


My questions are:



1. Which bonding mode should I choose? balance-alb?

Will you use IPv4? If so, then that might be your best choice. But do test different modes if you feel you are saturating network links and traffic does not get balanced. Use tools like bmon, atop, etc. to monitor bandwidth utilisation on the hosts.
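For reference, balance-alb needs no switch support, which fits your non-LACP switches. A minimal netplan sketch of such a bond (interface names, addresses and the use of netplan at all are assumptions, not from the original mail):

```yaml
# Hypothetical netplan config; adjust NIC names and addressing to your hosts.
network:
  version: 2
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: balance-alb          # adaptive load balancing, no LACP required
        mii-monitor-interval: 100  # link monitoring in ms
      addresses: [192.168.10.11/24]
```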


2. Are the disks ok for a POC? Or should I rather go with more smaller disks (960GB) e.g. 24 in total?

Disks are fine. I would aim for 1 core per SSD. So if you have 24 cores available, then more disks (OSDs) is "better". Ceph scales best with more OSDs / nodes (given enough RAM / CPU).
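One thing to keep in mind with only 3 hosts: with the usual host-level failure domain, an EC profile needs k+m <= number of hosts, so something like k=2, m=1 is about the only option. A quick sketch of the usable-capacity arithmetic (the k/m choice and the per-disk 1.92 TB figure are assumptions for illustration):

```python
def ec_usable_tb(osds: int, disk_tb: float, k: int, m: int) -> float:
    """Usable capacity of an erasure-coded pool: raw capacity scaled by k/(k+m)."""
    raw = osds * disk_tb
    return raw * k / (k + m)

# 12 x 1.92 TB drives with a k=2, m=1 profile (host failure domain on 3 nodes)
print(ec_usable_tb(12, 1.92, 2, 1))  # 15.36 TB usable out of 23.04 TB raw
```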


3. Are there any drawbacks when using EC pools?

History has proven there are more corner cases in EC than in replication (which is less complicated). There is more overhead, probably most visible during recovery (RAM / CPU). EC pools do not support OMAP, so you end up having replicated pools anyway when you want to use RBD / CephFS [1].
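In practice that means pairing a small replicated pool (for metadata / OMAP) with the EC data pool. A sketch of what that looks like for RBD, per the docs in [1] (profile and pool names here are made up for illustration):

```shell
# Hypothetical pool names; k=2/m=1 assumes 3 hosts with host failure domain.
ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
ceph osd pool create ecdata erasure ec21
ceph osd pool set ecdata allow_ec_overwrites true   # required for RBD/CephFS on EC
ceph osd pool create rbdmeta replicated

# Image metadata lives in the replicated pool, data objects in the EC pool:
rbd create --size 10G --data-pool ecdata rbdmeta/testimage
```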




Workload will be mostly VMs (vSphere / OpenStack), but also CephFS with Samba Gateway.


Gr. Stefan

[1]: https://docs.ceph.com/en/latest/rados/operations/erasure-code/#erasure-coding-with-overwrites
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


