Re: Open discussion: Designing a 50GB/s CephFS or S3 Ceph cluster

Dear Martin,

Thanks a lot for the insightful suggestions and comments.

The Seagate MACH.2 2X14 drive looks very interesting. Does this disk appear as two (logical) disks under Linux, allowing two OSDs per drive, or still as a single disk but with more IOPS and bandwidth?


Samuel



huxiaoyu@xxxxxxxxxxxx
 
From: Martin Verges
Date: 2021-10-22 07:57
To: huxiaoyu@xxxxxxxxxxxx
CC: ceph-users
Subject: Re: Open discussion: Designing a 50GB/s CephFS or S3 Ceph cluster
Hello,

If you choose Seagate MACH.2 2X14 drives, you get much better throughput as well as density. Your RAM is already a bit on the low end, and for the MACH.2 it would definitely be too low.
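As a rough back-of-envelope check (the ~4 GiB per OSD is the BlueStore osd_memory_target default; the headroom figure and the one-OSD-per-actuator layout are assumptions on my side):

# Rough per-node RAM estimate. Assumes the default BlueStore
# osd_memory_target of ~4 GiB per OSD, plus headroom for the OS,
# colocated daemons and page cache (assumed figure).
OSD_MEMORY_TARGET_GIB = 4
HEADROOM_GIB = 32

def node_ram_gib(osds_per_node: int) -> int:
    return osds_per_node * OSD_MEMORY_TARGET_GIB + HEADROOM_GIB

print(node_ram_gib(36))  # 36 conventional HDDs -> ~176 GiB, fits in 256 GiB
print(node_ram_gib(72))  # 36 MACH.2 drives, 2 OSDs each -> ~320 GiB, over 256 GiB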

You need dedicated metadata drives for S3 or MDS as well. Choose blazing-fast, low-capacity NVMe drives and put them in each server.
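For illustration, a minimal sketch of pinning the metadata pools to such NVMe devices via a device-class CRUSH rule; the rule name and pool names below are just common defaults/placeholders:

import subprocess

def ceph(*args: str) -> None:
    """Thin wrapper around the ceph CLI."""
    subprocess.run(["ceph", *args], check=True)

# Replicated CRUSH rule that only selects OSDs with device class "nvme".
ceph("osd", "crush", "rule", "create-replicated", "nvme-meta", "default", "host", "nvme")

# Move the CephFS metadata pool and the RGW bucket index pool onto that rule
# (pool names assume the usual defaults).
for pool in ("cephfs_metadata", "default.rgw.buckets.index"):
    ceph("osd", "pool", "set", pool, "crush_rule", "nvme-meta")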

> How many nodes should be deployed in order to achieve a minimum of 50GB/s, if possible, with the above hardware setting?
About 50 nodes should be able to deliver it, but that strongly depends on many more factors.
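To put rough numbers behind that estimate (the per-HDD streaming rate and the real-world efficiency factor are assumptions, not measurements):

# Back-of-envelope node count for 50 GB/s of aggregate client writes.
TARGET_GBPS = 50.0
HDD_MBPS = 100                   # assumed sustained MB/s per 7200 RPM HDD under mixed load
HDDS_PER_NODE = 36
EC_K, EC_M = 8, 3                # 8+3 erasure coding
NET_GBPS_PER_NODE = 2 * 25 / 8   # 2x25GbE bonded, ~6.25 GB/s line rate
EFFICIENCY = 0.4                 # assumed real-world vs. theoretical ratio for HDD clusters

raw_disk_gbps = HDD_MBPS * HDDS_PER_NODE / 1000           # ~3.6 GB/s raw per node
client_disk_gbps = raw_disk_gbps * EC_K / (EC_K + EC_M)   # ~2.6 GB/s after EC overhead
per_node_gbps = min(client_disk_gbps, NET_GBPS_PER_NODE) * EFFICIENCY

print(round(TARGET_GBPS / per_node_gbps))  # roughly 48 nodes in this simple model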

> How many CephFS MDS daemons are required (assuming a 1MB request size), and how many clients are needed to reach a total of 50GB/s?
MDS needs to be scaled according to the number of files rather than the number of requests. Of course, the more writes you want to do, the more load they get as well. Just colocate them on the servers and you are free to scale the active number to your liking.
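For reference, a minimal sketch of raising the active MDS count once the daemons are colocated; the filesystem name "cephfs" is the default and assumed here:

import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

# Allow two active MDS ranks; any additional MDS daemons remain standby.
ceph("fs", "set", "cephfs", "max_mds", "2")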

> From the perspective of getting the maximum bandwidth, which one should I choose, CephFS or Ceph S3?
Choose what's best for your application / use case scenario.

--
Martin Verges
Managing director

Mobile: +49 174 9335695  | Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx


On Thu, 21 Oct 2021 at 18:24, huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx> wrote:
Dear Cephers,

I am thinking of designing a CephFS or S3 cluster, with a target of achieving a minimum of 50GB/s (write) bandwidth. For each node, I am considering a 4U 36x 3.5" Supermicro server with 36x 12TB 7200 RPM HDDs, 2x Intel P4610 1.6TB NVMe SSDs for DB/WAL, a single-socket AMD 7302 CPU, and 256GB of DDR4 memory. Each node has 2x 25Gb networking, bonded in mode 4. 8+3 EC will be used.
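As a sketch of how such an 8+3 pool could be defined (profile name, pool name and PG count are placeholders, not a recommendation):

import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

# 8+3 erasure-code profile with host as the failure domain, HDD OSDs only.
ceph("osd", "erasure-code-profile", "set", "ec83",
     "k=8", "m=3", "crush-failure-domain=host", "crush-device-class=hdd")

# Erasure-coded data pool using that profile (PG count is an assumption).
ceph("osd", "pool", "create", "cephfs_data", "4096", "4096", "erasure", "ec83")

# Required for CephFS (and RBD) on an EC data pool.
ceph("osd", "pool", "set", "cephfs_data", "allow_ec_overwrites", "true")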

My questions are the following: 

1   How many nodes should be deployed in order to achieve a minimum of 50GB/s, if possible, with the above hardware setting?

2   How many CephFS MDS daemons are required (assuming a 1MB request size), and how many clients are needed to reach a total of 50GB/s?

3   From the perspective of getting the maximum bandwidth, which one should I choose, CephFS or Ceph S3?

Any comments, suggestions, or improvement tips are warmly welcome.

Best regards,

Samuel



huxiaoyu@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


