Balanced use of HDD and SSD

Hello all,

A year ago we started with a 3-node Ceph cluster with 21 HDDs and 3
SSDs, which we installed with cephadm, configuring the disks with
`ceph orch apply osd --all-available-devices`.
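
As far as I understand, that command is equivalent to applying an OSD
service specification roughly like the following (my reconstruction,
not the actual spec cephadm stored for us):

  service_type: osd
  service_id: all-available-devices
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true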

Over time, usage has grown quite significantly: we now have another
5 nodes with 8-12 HDDs and 1-2 SSDs each, and integrating them with
`ceph orch add host` worked without any problems. Now we wonder
whether the HDDs and SSDs are being used as recommended, so that
access is fast but the SSDs are not wasted.
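
As far as I can tell, the orchestrator only shows me the inventory and
the applied specs, not which role each device ended up with; the
commands I am aware of are along these lines:

  # hosts and devices as cephadm sees them
  ceph orch host ls
  ceph orch device ls

  # the OSD service specs currently applied
  ceph orch ls osd --export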

My questions: How can I check which devices are used as data_devices
and which as db_devices? And can we still apply a setup like the
second example in this documentation?
https://docs.ceph.com/en/latest/cephadm/osd/#the-simple-case
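
To make the first question concrete: I would expect the per-OSD
metadata to reveal the current layout, e.g.

  # bluefs_dedicated_db / bluefs_db_rotational should show whether an
  # OSD has its DB on a separate, non-rotational device
  ceph osd metadata 0 | grep -E 'bluefs|rotational'

And for the second question, I imagine a spec along these lines
(adapted from the linked page, using rotational flags instead of the
model-based filters shown there; the service_id is just a placeholder,
and this is untested):

  service_type: osd
  service_id: hdd_with_ssd_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

presumably applied with `ceph orch apply -i osd_spec.yml --dry-run`
first to see what cephadm would actually do.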

Some technical details: Xeons with plenty of RAM and cores, Ceph
16.2.5 with mostly default configuration, Ubuntu 20.04, separate
cluster and public networks (both 10 Gb), used for RBD (QEMU), CephFS,
and the Ceph Object Gateway. (The latter is surprisingly slow, but I
want to sort out the underlying configuration first.)

Thanks for any helpful responses,
Erich