Re: Ceph Cluster Deployment - Recommendation

You may want to think about 100Gb between the data halls and 100Gb to each
server. The Proxmox team did some testing and released a paper this month
showing that, with fast NVMe drives, anything short of dual 25Gb links
(with FRR routing across them) or a 100Gb interface will leave you
bottlenecked by the network.
https://proxmox.com/images/download/pve/docs/Proxmox-VE-Ceph-Benchmark-202312-rev0.pdf
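
To put rough numbers on that for the hardware described below (assuming the
spec-sheet sequential read of roughly 6.8 GB/s per Micron 7450 PRO, so treat
these as ballpark figures only):

    8 NVMe x ~6.8 GB/s  ~= 54   GB/s raw read bandwidth per node
    1 x 10GbE           ~= 1.25 GB/s line rate
    2 x 25GbE           ~= 6.25 GB/s line rate
    1 x 100GbE          ~= 12.5 GB/s line rate

Even after replication and CPU overhead eat most of the raw drive bandwidth,
a single 10Gb link is more than an order of magnitude short of what the
drives can deliver, and the 10Gb hall-to-hall interconnect also has to carry
all of the replication traffic between the two halls.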


On Mon, Dec 18, 2023 at 10:20 AM Amardeep Singh <
amardeep.singh@xxxxxxxxxxxxxx> wrote:

>  Hi Everyone,
>
> We are in the process of planning a Ceph cluster deployment for our data
> infrastructure.
>
> To provide you with a bit of context, we have deployed hardware across two
> data halls in our data center, and they are connected via a 10Gb
> interconnect.
>
> The hardware configuration for the 4 x Ceph cluster nodes (2 x servers in
> each data hall) is:
>
>
>   *   2 x AMD EPYC 7513 - 32 Cores / 64 Threads
>
>   *   512GB RAM
>
>   *   2 x 960 GB (OS DISKS)
>
>   *   8x Micron 7450 PRO 7680GB NVMe - PCIe Gen4
>
>   *   Intel X550-T2 - 10GbE Dual-Port RJ45 Server Adaptor
>
> Our primary use case is the Object Gateway, and we will be running 4 x RGW
> services.
>
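
If it helps, a minimal cephadm service spec for the four RGW daemons might
look roughly like this (the service_id and frontend port are placeholders,
adjust them to your realm/zone setup):

    service_type: rgw
    service_id: objectstore
    placement:
      count: 4
    spec:
      rgw_frontend_port: 8080

saved to a file and applied with "ceph orch apply -i rgw-spec.yaml".
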
> We are aiming to deploy using cephadm and to run MON/MGR/RGW and OSDs on
> all nodes.
>
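
For what it's worth, a rough sketch of bringing up that colocated layout
with cephadm (hostnames and IPs below are placeholders):

    # bootstrap on the first node; this creates the initial MON and MGR
    cephadm bootstrap --mon-ip 10.0.0.11

    # add the remaining hosts to the orchestrator
    ceph orch host add ceph02 10.0.0.12
    ceph orch host add ceph03 10.0.0.13
    ceph orch host add ceph04 10.0.0.14

    # spread MON/MGR across the nodes and turn every free NVMe into an OSD
    ceph orch apply mon --placement=3
    ceph orch apply mgr --placement=4
    ceph orch apply osd --all-available-devices

Note that a fourth MON buys you nothing over three: quorum needs a majority,
so both 3 and 4 MONs tolerate only a single MON failure, which is why the
sketch pins the MON count at 3.
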
> Given our limited experience with Ceph, we are reaching out to the
> knowledgeable members of this community for recommendations and best
> practices. We would greatly appreciate any insights or advice you can share
> regarding the following aspects:
>
> Cluster Topology: Considering our hardware setup with two data halls
> connected via a 10Gb interconnect, what would be the recommended cluster
> topology for optimal performance and fault tolerance?
>
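
One building block for that, whatever exact topology you end up with, is
teaching CRUSH about the two data halls so replicas can be split across
them. A rough sketch (bucket and host names below are placeholders):

    # describe the physical layout in the CRUSH map
    ceph osd crush add-bucket dh1 datacenter
    ceph osd crush add-bucket dh2 datacenter
    ceph osd crush move dh1 root=default
    ceph osd crush move dh2 root=default
    ceph osd crush move ceph01 datacenter=dh1
    ceph osd crush move ceph02 datacenter=dh1
    ceph osd crush move ceph03 datacenter=dh2
    ceph osd crush move ceph04 datacenter=dh2

    # replicated rule with the data hall as the failure domain
    ceph osd crush rule create-replicated rep_dh default datacenter

Keep in mind this simple rule places at most one replica per hall, so with
only two halls it caps replicated pools at size=2; splitting four replicas
2+2 needs a custom rule or stretch mode, and either way every cross-hall
write has to traverse that 10Gb interconnect.
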
> Best Practices for Deployment: Are there any recommended best practices
> for deploying Ceph in a similar environment? Any challenges we should be
> aware of?
>
> Thank you in advance for your time and assistance.
>
> Regards,
> Amar
>


-- 
Zach Underwood (RHCE,RHCSA,RHCT,UACA)
My website <http://zachunderwood.me>
advance-networking.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



