Re: Questions about using existing HW for PoC cluster

I've been reading "Learning Ceph - Second Edition" (https://learning.oreilly.com/library/view/learning-ceph-/9781787127913/8f98bac7-44d4-45dc-b672-447d162ea604.xhtml), and in Ch. 4 I came across this:

"We've noted that Ceph OSDs built with the new BlueStore back end do not require journals. One might reason that additional cost savings can be had by not having to deploy journal devices, and this can be quite true. However, BlueStore does still benefit from provisioning certain data components on faster storage, especially when OSDs are deployed on relatively slow HDDs. Today's investment in fast FileStore journal devices for HDD OSDs is not wasted when migrating to BlueStore. When repaving OSDs as BlueStore devices the former journal devices can be readily re purposed for BlueStore's RocksDB and WAL data. When using SSD-based OSDs, this BlueStore accessory data can reasonably be colocated with the OSD data store. For even better performance they can employ faster yet NVMe or other technloogies for WAL and RocksDB. This approach is not unknown for traditional FileStore journals as well, though it is not inexpensive.Ceph clusters that are fortunate to exploit SSDs as primary OSD dri
 ves usually do not require discrete journal devices, though use cases that require every last bit of performance may justify NVMe journals. SSD clusters with NVMe journals are as uncommon as they are expensive, but they are not unknown."
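If I'm reading that right, when I repave a FileStore OSD as BlueStore I could point ceph-volume at the old SSD journal partition for the DB/WAL. A rough sketch of what I have in mind, with made-up device names (/dev/sdb as the HDD, /dev/sdg1 as the former journal partition):

    # data on the HDD, RocksDB + WAL on the former journal partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdg1

My understanding is that if --block.wal isn't given separately, the WAL simply lives alongside the DB on the block.db device.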

So can I get by with a single SATA SSD per server (and if so, how large?) for RocksDB/WAL if I'm using BlueStore?
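If that's workable, my plan would be to carve the one SSD into a partition per HDD OSD, roughly like this (again with hypothetical device names, /dev/sdg being the shared SSD):

    # one block.db partition per HDD OSD, repeated for each OSD
    sgdisk -n 0:0:+60G /dev/sdg
    sgdisk -n 0:0:+60G /dev/sdg
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdg1
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdg2

Is splitting a single SATA SSD across several OSDs like that asking for trouble, or acceptable for a PoC?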


> - Is putting the journal on a partition of the SATA drives a real I/O killer? (this is how my Proxmox boxes are set up)
> - If YES to the above, then is a SATA SSD acceptable for journal device, or should I definitely consider PCIe SSD? (I'd have to limit to one per server, which I know isn't optimal, but price prevents otherwise...)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


