Re: Mix NVME's in a single cluster

It’s difficult to fully answer your question with the information provided.  Notably, your networking setup and the RAM / CPU SKUs are important inputs.

Assuming that the hosts have or would have sufficient CPU and RAM for the additional OSDs, there wouldn't necessarily be a downside, though you might wish to use a gradual balancing strategy.
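One rough sketch of that, assuming the new OSDs are added with zero CRUSH weight and then ramped up (the OSD ID and weights below are placeholders; a 30TB drive ends up around 27.3 in CRUSH, i.e. its size in TiB):

    # Have newly created OSDs join the CRUSH map with no weight (no data):
    ceph config set osd osd_crush_initial_weight 0

    # After deployment, raise each new OSD's CRUSH weight in steps,
    # letting backfill settle between steps:
    ceph osd crush reweight osd.80 5.0
    ceph osd crush reweight osd.80 15.0
    ceph osd crush reweight osd.80 27.3

On Reef the mClock scheduler throttles backfill by default, so you mostly just control how much weight you hand out per step.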

The new drives are double the size of the old, so unless you take steps they will get roughly double the PGs and thus roughly double the workload of the existing drives.  But since you aren't subject to the SATA bottleneck, unless your hosts are PCIe Gen 3 and your networking is insufficient, I suspect that you'll be fine.
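Once they're in, you can sanity-check how that plays out with:

    # Per-OSD CRUSH weight, utilization, and PG count; the 30TB OSDs
    # should settle at roughly twice the PGS of the 15TB ones:
    ceph osd df tree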

You could use a custom device class and CRUSH rule to segregate the larger/faster drives into their own pool(s), but if you’re adding capacity for existing use-cases, I’d probably just go for it and celebrate the awesome hardware.
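If you did want to segregate them, a minimal sketch (the class, rule, and pool names here are made up, and osd.80 stands in for each new OSD):

    # Replace the auto-detected device class on the new OSDs:
    ceph osd crush rm-device-class osd.80
    ceph osd crush set-device-class nvme-30t osd.80

    # Replicated CRUSH rule restricted to that class, failure domain = host:
    ceph osd crush rule create-replicated big-nvme default host nvme-30t

    # Point the pool(s) you want on the big drives at the new rule:
    ceph osd pool set mypool crush_rule big-nvme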


> On Jan 24, 2025, at 9:35 AM, Bruno Gomes Pessanha <bruno.pessanha@xxxxxxxxx> wrote:
> 
> I have a Ceph Reef cluster with 10 hosts with 16 nvme slots but only half
> occupied with 15TB (2400 KIOPS) drives. 80 drives in total.
> I want to add another 80 to fully populate the slots. The question:
> What would be the downside if I expand the cluster with 80 x 30TB (3300
> KIOPS) drives?
> 
> Thank you!
> 
> Bruno
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



