Re: What is the advice, one disk per OSD, or multiple disks

Just to expand on the answer of Robert.

If all devices are of the same class (HDD/SSD/NVMe), then a one-to-one relationship between OSDs and disks is most likely the best choice.

If you have very fast devices, it might be good to have multiple OSDs on one device, at the cost of some complexity.

If you have devices of multiple classes (HDDs and SSDs, for example), it might be a good idea to offload some of the OSD's internal data onto the faster devices. This is done by placing the write-ahead log (WAL) and/or the RocksDB metadata database on the faster device.
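To illustrate, here is a rough sketch of how such an offload can be set up with ceph-volume. The device paths (/dev/sdb for the HDD, /dev/nvme0n1p1 and /dev/nvme0n1p2 for partitions on the fast device) are placeholders for this example; adjust them to your hardware.

```shell
# Sketch: create a BlueStore OSD whose data lives on an HDD
# while the RocksDB metadata and the WAL are placed on
# partitions of a faster NVMe device. Device names below are
# examples only -- substitute your own.
ceph-volume lvm create \
    --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```

If --block.wal is omitted, the WAL is kept together with the RocksDB data on the --block.db device, which is usually sufficient.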

Kind regards,

Wout
42on

________________________________________
From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
Sent: Monday, September 21, 2020 3:09 PM
To: ceph-users@xxxxxxx
Subject:  Re: What is the advice, one disk per OSD, or multiple disks

On 21.09.20 14:29, Kees Bakker wrote:

> Being new to CEPH, I need some advice how to setup a cluster.
> Given a node that has multiple disks, should I create one OSD for
> all disks, or is it better to have one OSD per disk.

The general rule is one OSD per disk.

There may be an exception with very fast devices like NVMe, where a single OSD is not able to fully use the available IO bandwidth. NVMe devices can host two OSDs per device.
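As a sketch, ceph-volume's batch mode can split a device between multiple OSDs. The device path /dev/nvme0n1 is an example placeholder.

```shell
# Sketch: provision two OSDs on a single fast NVMe device.
# ceph-volume carves the device into two LVM logical volumes,
# one per OSD. Device name is an example only.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```

This only pays off when the device genuinely outruns one OSD daemon; on HDDs or SATA SSDs it just adds overhead.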

But you would not create one OSD over multiple devices.

Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
