>>>> I have a Ceph Reef cluster with 10 hosts with 16 nvme slots
>>>> but only half occupied with 15TB (2400 KIOPS) drives. 80
>>>> drives in total. I want to add another 80 to fully populate
>>>> the slots. The question: What would be the downside if I
>>>> expand the cluster with 80 x 30TB (3300 KIOPS) drives?

Most previous replies have focused on potential capacity bottlenecks, even if some have mentioned PGs and balancing. I reckon that balancing is by far the biggest issue you are likely to have, because most Ceph releases (I do not know about Reef) have difficulty balancing across drives of different sizes, even with configuration changes.

Possible solutions/workarounds:

* Assign different CRUSH weights. This configuration change is "supposed" to work.
* Assign the 30TB drives to a different device class and use them for new "pools".
* Split each 30TB drive into two OSDs. Not a good idea for HDDs, of course, but these are low-latency SSDs.

The other main problem with large-capacity OSDs is the size of the PGs, which can become very large with the default target number of PGs per OSD; a previous commenter mentioned that. I think that the current configuration style, where one sets the number of PGs rather than their size, leads people astray (a rough calculation below illustrates the effect).

In general, my impression is that the current Ceph defaults, and indeed its very design (a single level of grouping: PGs), were meant to be used with OSDs of at most 1TB, and that larger OSDs are anyhow not a good idea; but of course there are many people who know better, and good luck to them.
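To make the PG-size point concrete, here is a rough back-of-the-envelope sketch in Python. The ~100 PGs-per-OSD target and the 70% fill ratio are figures I am assuming for illustration, not anything from the original post, and the real numbers will depend on pool layout, replication and the autoscaler:

# Rough estimate of the data behind one PG replica on a single OSD,
# as a function of OSD size and the PG-per-OSD target.
# Assumptions (mine, illustrative only): ~100 PGs per OSD, 70% full,
# and 1 TB = 1000 GB.

def avg_pg_size_gb(osd_size_tb, pgs_per_osd=100, fill_ratio=0.7):
    return osd_size_tb * 1000 * fill_ratio / pgs_per_osd

for size_tb in (1, 15, 30):
    print(f"{size_tb:>2} TB OSD -> ~{avg_pg_size_gb(size_tb):.0f} GB per PG")

With the same per-OSD PG target, a 30TB OSD ends up with PGs roughly 30 times larger than a 1TB OSD (~210 GB vs ~7 GB in this toy example), and every backfill or recovery of a single PG then has to move that much data, which is why setting the number of PGs rather than their size gets uncomfortable as OSDs grow.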