Re: Ceph Cluster Deployment - Recommendation

Hi Anthony,

What is the recommended number of servers across two data halls for a quality cluster? Would a replicated pool with size 4 not work here?
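For reference, a minimal sketch of what we had in mind (pool name is a placeholder, and this assumes a CRUSH rule that splits copies across the halls):

    ceph osd pool set <pool> size 4       # 4 copies in total, 2 per hall via CRUSH
    ceph osd pool set <pool> min_size 2   # keep serving I/O if a whole hall is lost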

Thanks,

Amar

On 18/12/23 21:12, Anthony D'Atri wrote:

Four servers doth not a quality cluster make.  This setup will work, but you can't use a reasonable EC profile for your bucket pool.  Aim higher than the party line wrt PG counts esp. for the index pool.
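To put numbers on it: with crush-failure-domain=host, k+m can't exceed your four hosts, so a typical bucket-data profile like k=4/m=2 (which wants at least six hosts) won't place PGs at all. Sketch below assumes the default RGW pool names; the profile name is illustrative:

    # k=4/m=2 needs >= 6 hosts when the failure domain is host;
    # on 4 hosts these PGs would sit undersized, never active+clean
    ceph osd erasure-code-profile set rgw-data-ec k=4 m=2 crush-failure-domain=host
    ceph osd pool create default.rgw.buckets.data erasure rgw-data-ec

    # index pool: push pg_num above what the autoscaler suggests
    ceph osd pool set default.rgw.buckets.index pg_num 128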

On Dec 18, 2023, at 10:19, Amardeep Singh <amardeep.singh@xxxxxxxxxxxxxx> wrote:

Hi Everyone,

We are in the process of planning a Ceph cluster deployment for our data infrastructure.

To provide you with a bit of context, we have deployed hardware across two data halls in our data center, and they are connected via a 10Gb interconnect.

The hardware configuration for the 4-node Ceph cluster (2 servers in each data hall):


*   2 x AMD EPYC 7513 - 32 Cores / 64 Threads

*   512GB RAM

*   2 x 960GB (OS disks)

*   8x Micron 7450 PRO 7680GB NVMe - PCIe Gen4

*   Intel X550-T2 - 10GbE Dual-Port RJ45 Server Adaptor

Our primary use case is the Object Gateway, and we will be running 4 x RGW services.

We are aiming to deploy using cephadm and to utilize all nodes for MON/MGR/RGW and OSDs.
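A rough sketch of the placement we have in mind (counts are our current guess, and the RGW service id is a placeholder):

    # co-locate everything across the four nodes
    ceph orch apply mon --placement=4     # we realize 4 MONs give no quorum headroom over 3
    ceph orch apply mgr --placement=2
    ceph orch apply rgw objectstore --placement=4
    ceph orch apply osd --all-available-devices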

Given our limited experience with Ceph, we are reaching out to the knowledgeable members of this community for recommendations and best practices. We would greatly appreciate any insights or advice you can share regarding the following aspects:

Cluster Topology: Considering our hardware setup with two data halls connected via a 10Gb interconnect, what would be the recommended cluster topology for optimal performance and fault tolerance?
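For instance, we assume the CRUSH side would pin two replicas in each hall, along these lines (the "datacenter" bucket type and the rule name are our guesses at the convention):

    # Sketch: 2 copies in each of two data halls; assumes the hosts
    # sit under two datacenter buckets in the CRUSH hierarchy
    rule replicated_two_halls {
        id 5
        type replicated
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

We are also unsure where monitor quorum should live, since with monitors split 2/2 the surviving hall cannot form a majority on its own.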

Best Practices for Deployment: Are there any recommended best practices for deploying Ceph in a similar environment? Any challenges we should be aware of?

Thank you in advance for your time and assistance.

Regards,
Amar

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


