Re: recommendation for barebones server with 8-12 direct attach NVMe?

> Also in our favour is that the users of the cluster we are currently intending for this have established a practice of storing large objects.

That definitely is in your favor.

> but it remains to be seen how 60x 22TB behaves in practice.

Be sure you don't get SMR drives.
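
A quick way to sanity-check delivered drives: the kernel exposes the zoned model per block device, so something like the sketch below (my own illustration, not anything from this thread) will flag host-aware or host-managed SMR.  Drive-managed SMR still reports "none", so you also have to match model numbers against the vendor's SMR lists.

#!/usr/bin/env python3
# Sketch: print the kernel's zoned classification for each block device.
# Host-aware / host-managed SMR shows up here; drive-managed SMR reports
# "none", so also check model numbers against the vendor's SMR lists.
import glob
import os

for path in sorted(glob.glob("/sys/block/*/queue/zoned")):
    dev = path.split("/")[3]
    with open(path) as f:
        zoned = f.read().strip()
    model_path = f"/sys/block/{dev}/device/model"
    model = ""
    if os.path.exists(model_path):
        with open(model_path) as f:
            model = f.read().strip()
    print(f"{dev:8s} {zoned:14s} {model}")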

>  and it's hard for it to rebalance.


^ This.
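
To put a rough number on why 22 TB spinners are painful to rebalance (my assumptions, not the OP's):

# Assume a failed OSD was ~70% full and the cluster can push roughly
# 100 MB/s of sustained backfill toward its replacement without
# starving client I/O.  Both figures are assumptions for illustration.
capacity_tb = 22
fill_fraction = 0.7
backfill_mb_s = 100

data_mb = capacity_tb * 1e6 * fill_fraction
hours = data_mb / backfill_mb_s / 3600
print(f"~{hours:.0f} hours to re-replicate one drive")   # ~43 hours

And that's the happy path -- a second failure inside that window is exactly what makes dense spinners nerve-wracking.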

> What is OLC?

QLC SSDs store 33% more data than TLC: four bits per cell vs. three, i.e. 16 voltage levels per cell vs. 8.
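
For reference, the generic NAND math behind that (nothing vendor-specific):

# bits stored per cell, voltage levels the controller must distinguish,
# and capacity relative to TLC
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    levels = 2 ** bits
    print(f"{name}: {bits} bits/cell, {levels:2d} levels, {bits / 3:.0%} of TLC capacity")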

> Fascinating to hear about destroy-redeploy being safer than a simple restart-recover!

This was on Luminous; that dynamic may be different now, especially with the async recovery that landed in Nautilus.

> Agreed. I guess I wanted to add the data point that these kinds of clusters can and do make full sense in certain contexts, and push a little away from "friends don't let friends use HDDs" dogma.

Understood.  Some deployments aren't squeezed for DC space -- today.  But since many HDD deployments use LFF chassis, the form-factor and interface limitations still complicate expansion and SSD adoption down the road.

> For now, we limit individual cloud volumes to 300 IOPs, doubled for those who need it.

I'm curious how many clients / volumes you have vs. the number of HDD OSDs, and whether you're using replication or EC.  If you have relatively few clients per HDD OSD, that would definitely improve the dynamic.
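
Roughly, the comparison I have in mind (every number below is a placeholder, not yours):

# Compare worst-case client demand per HDD OSD against a rough write
# budget per OSD.  All figures here are assumptions for illustration.
volumes = 1000          # assumed number of capped cloud volumes
iops_cap = 300          # per-volume limit from your post
hdd_osds = 600          # assumed HDD OSD count
hdd_iops = 150          # rough random IOPS for a 7.2k RPM drive
replication = 3         # each client write turns into ~3 backend writes

demand_per_osd = volumes * iops_cap / hdd_osds   # every volume at its cap at once
budget_per_osd = hdd_iops / replication          # write budget per spindle
print(f"worst case {demand_per_osd:.0f} IOPS/OSD vs ~{budget_per_osd:.0f} available")

Of course nobody drives every volume at its cap simultaneously, which is why the real clients-per-OSD ratio is the interesting number.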



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


