Re: Mix NVME's in a single cluster

Well heck, you’re good to go unless these are converged with compute.

168 threads / 16 OSDs = ~10 threads per OSD, with some left over for the OS, observability, etc.  You’re more than good.  Suggest using BIOS settings and TuneD to disable deep C-states, and verify with `powertop`.  Increase cooling and use a performance thermal profile.
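For example (a rough sketch, untested; adjust for your distro and BIOS), something along these lines keeps the cores out of deep C-states and lets you confirm it:

    # Low-latency TuneD profile (caps C-state depth via force_latency)
    tuned-adm profile latency-performance

    # Or pin it at the kernel command line in /etc/default/grub:
    #   GRUB_CMDLINE_LINUX="... processor.max_cstate=1"

    # Verify which C-states the cores actually reside in
    powertop
    cpupower idle-info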

Disable IOMMU in GRUB defaults, tune the TCP stack, somaxconn, nf_conntrack, etc.
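Roughly, assuming EL-style GRUB handling and EPYC hosts (the sysctl values below are illustrative, not a recommendation):

    # /etc/default/grub -- disable the IOMMU, then rebuild grub.cfg and reboot
    #   GRUB_CMDLINE_LINUX="... amd_iommu=off"
    grub2-mkconfig -o /boot/grub2/grub.cfg

    # /etc/sysctl.d/90-ceph-net.conf -- example values only
    #   net.core.somaxconn = 8192
    #   net.netfilter.nf_conntrack_max = 1048576
    #   net.core.rmem_max = 16777216
    #   net.core.wmem_max = 16777216
    sysctl --system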

768GB is more than ample.  For 16 OSDs I would nominally spec 192GB.
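With that much RAM you could also raise osd_memory_target above the 4GB default, e.g. (my number, size to taste):

    # ~12GB per OSD still leaves ample headroom with 16 OSDs in 768GB
    ceph config set osd osd_memory_target 12884901888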

I might even be tempted to split the 30TB SSDs into 2x OSDs each to gain even more parallelism.
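If you go that route, ceph-volume can do the split for you; a sketch, assuming whole-device deployment (device paths are placeholders):

    # Two OSDs per NVMe device; repeat per host, or drive it from an
    # orchestrator OSD spec with osds_per_device: 2
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1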

> On Jan 24, 2025, at 10:34 AM, Bruno Gomes Pessanha <bruno.pessanha@xxxxxxxxx> wrote:
> 
> ram: 768GB
> cpu: AMD EPYC 9634 84-Core
> 
> On Fri, 24 Jan 2025 at 15:48, Anthony D'Atri <aad@xxxxxxxxxxxxxx <mailto:aad@xxxxxxxxxxxxxx>> wrote:
>> It’s difficult to fully answer your question with the information provided.  Notably, your networking setup and the RAM / CPU SKUs are important inputs.
>> 
>> Assuming that the hosts have or would have sufficient CPU and RAM for the additional OSDs there wouldn’t necessarily be a downside, though you might wish to use a gradual balancing strategy.
>> 
>> The new drives are double the size of the old, so unless you take steps they will get double the PGs and thus double the workload of the existing drives.  But since you aren’t subject to the SATA bottleneck, unless your hosts are PCIe Gen 3 and your networking insufficient, I suspect that you’ll be fine.
>> 
>> You could use a custom device class and CRUSH rule to segregate the larger/faster drives into their own pool(s), but if you’re adding capacity for existing use-cases, I’d probably just go for it and celebrate the awesome hardware.
>> 
>> 
>> > On Jan 24, 2025, at 9:35 AM, Bruno Gomes Pessanha <bruno.pessanha@xxxxxxxxx <mailto:bruno.pessanha@xxxxxxxxx>> wrote:
>> > 
>> > I have a Ceph Reef cluster with 10 hosts with 16 nvme slots but only half
>> > occupied with 15TB (2400 KIOPS) drives. 80 drives in total.
>> > I want to add another 80 to fully populate the slots. The question:
>> > What would be the downside if I expand the cluster with 80 x 30TB (3300
>> > KIOPS) drives?
>> > 
>> > Thank you!
>> > 
>> > Bruno
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@xxxxxxx <mailto:ceph-users@xxxxxxx>
>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx <mailto:ceph-users-leave@xxxxxxx>
>> 
> 
> 
> 
> --
> Bruno Gomes Pessanha

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



