Re: Dear Abby: Why Is Architecting CEPH So Hard?

Hi,


On 4/22/20 11:47 PM, cody.schmidt@xxxxxxxxxxxxxxxxxxx wrote:

> Example 1:
> 8x 60-Bay (8TB) Storage nodes (480x 8TB SAS Drives)
> Storage Node Spec: 
> 2x 32C 2.9GHz AMD EPYC
>    - Documentation mentions .5 cores per OSD for throughput optimized. Are they talking about .5 Physical cores or .5 Logical cores?

It does not matter much.
CPU is used for recovery as well as for RBD snap trimming.
The real rule: do not run 12 OSDs on a 4-core host.

>    - Is it better to pick my processors based on a total GHz measurement like 2GHz per OSD?
>    - Would a theoretical 8C at 2GHz serve the same number of OSDs as a 16C at 1GHz? Would Threads be included in this calculation?
Higher frequency leads to lower latency, and hence higher performance.
For your example, I would take the 8 cores at 2 GHz.
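To make the arithmetic concrete, here is a rough sketch of the sizing rules above. The 0.5-cores-per-OSD baseline is the documentation figure quoted in the question, and the "no 12 OSDs on a 4-core host" ceiling is my rule of thumb from this thread, not an official limit:

```python
import math

# Back-of-envelope CPU sizing for throughput-optimized OSD nodes.
# Assumptions (from this thread, not official Ceph requirements):
#   - ~0.5 physical cores per OSD as a baseline
#   - never exceed ~3 OSDs per physical core (the "12 OSDs on
#     a 4-core host" anti-pattern above)

def min_cores_for_osds(osds_per_node, cores_per_osd=0.5):
    """Baseline physical core count for a given OSD count."""
    return math.ceil(osds_per_node * cores_per_osd)

def osds_per_core_ok(osds_per_node, physical_cores, max_ratio=3.0):
    """True if the node stays under the rough OSDs-per-core ceiling."""
    return osds_per_node / physical_cores <= max_ratio

# 60-bay node from the example: 60 OSDs -> 30 physical cores,
# so one 32-core EPYC per node is already a comfortable fit.
print(min_cores_for_osds(60))        # -> 30
print(osds_per_core_ok(60, 32))      # -> True
print(osds_per_core_ok(12, 4))       # -> True, but only just at the limit
```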

> 512GB Memory
>    - I know this is the hot topic because of its role in recoveries. Basically, I am looking for the most generalized practice I can use as a safe number and a metric I can use as a nice to have. 
>    - Is it 1GB of RAM per TB of RAW OSD?
Well,
more RAM -> more performance, as always.
I have 1 GB per TB on my spinning-rust cluster.
On my flash-based cluster, I have between 2.5 GB and 7 GB per TB.
Again, my real rule: 64 GB of memory per node, and no more than 12
device slots per node.
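As a starting number, the personal rules above combine into a simple calculation. These are rules of thumb from my own clusters, not official sizing requirements:

```python
# Rough RAM sizing per storage node, combining the rules of thumb
# in this thread (personal practice, not official Ceph requirements):
#   - ~1 GB RAM per TB of raw HDD capacity
#   - a floor of 64 GB per node regardless of capacity

def node_ram_gb(raw_tb, gb_per_tb=1.0, floor_gb=64):
    """Suggested RAM in GB for a node with raw_tb of raw capacity."""
    return max(floor_gb, raw_tb * gb_per_tb)

# 60 bays x 8 TB = 480 TB raw per node -> ~480 GB,
# so the proposed 512 GB per node is in the right ballpark.
print(node_ram_gb(60 * 8))   # -> 480.0
# A small node still gets the 64 GB floor:
print(node_ram_gb(24))       # -> 64
```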

>    - I know more is better, but what is a number I can use to get started with minimal issues?
> 2x 56Gbit Links
10G is the cheapest; do not go below it.
25G is cheap too; consider it.

> - I think this should be enough given the rule of thumb of 10Gbit for every 12 OSDs.
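Running the numbers on that rule of thumb (an informal guideline, not an official figure) supports your conclusion:

```python
# Sanity check of the "10 Gbit for every 12 OSDs" rule of thumb
# quoted above (an informal guideline, not an official Ceph figure).

def min_link_gbit(osd_count, gbit_per_12_osds=10):
    """Rough minimum node bandwidth in Gbit/s for a given OSD count."""
    return osd_count / 12 * gbit_per_12_osds

# 60 OSDs per node -> ~50 Gbit/s, so 2x 56 Gbit links leave headroom
# even with one link down.
print(min_link_gbit(60))   # -> 50.0
```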
> 3x MON Node
> MON Node Spec:
> 1x 8C 3.2GHz AMD EPYC
> - I can’t really find good practices around when to increase your core count. Any suggestions?

I have never seen any noticeable CPU usage on monitors.
I wonder if a dual core would suit perfectly ..
(the point about higher frequency applies here too, though)

> 128GB Memory
>    - What do I need memory for in a MON?
>    - When do I need to expand?
Same as the CPU: a couple of GB has always been enough for me.

> 2x 480GB Boot SSDs
>    - Any reason to look more closely into the sizing of these drives?
I use 32 GB flash-based SATA DOM devices as the root device.
They are basically SSDs, and do not take up front slots.
As they never burn out, we never replace them.
Ergo, needing to "open" the server is not an issue.

> 2x 25Gbit Uplinks
>    - Should these match the output of the storage nodes for any reason?
10G!


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



