Re: HDD <-> OSDs

Hi,

On 22.06.21 11:55, Thomas Roth wrote:
Hi all,

newbie question:

The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. the 4-HDD example in https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/).

This seems odd: what if a server has a large number of disks? I was going to try CephFS on ~10 servers with 70 HDDs each. That would mean each system has to deal with 70 OSDs, on 70 LVs?

Really no aggregation of the disks?


The recommendation is one OSD per HDD, and most deployment tools (ceph-ansible / cephadm) follow this guideline. The main reason is keeping the unit of failure small. Using RAID across several HDDs means a single disk failure can leave 10-20 TB of data to be recovered. The same goes for large storage boxes; just imagine a complete, unrecoverable failure of a box with 70x 10 TB disks. More data to recover means a longer recovery time, which increases the probability of a second or (usually fatal) third failure before recovery finishes.
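
For reference, this one-OSD-per-HDD layout is what ceph-volume's batch mode produces by default: one LV and one OSD per device. A rough sketch (device names /dev/sdb through /dev/sde are just examples, adjust to your hosts):

  # Dry run: report how the OSDs would be laid out, one per disk
  ceph-volume lvm batch --bluestore --report /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # Actually create one BlueStore OSD per HDD
  ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd /dev/sde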


If you plan to use a different setup, you either need to adapt the deployment tools (which should be possible with ceph-ansible, but is difficult in the case of cephadm) or deploy the OSDs manually.
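
To illustrate why cephadm is the difficult case: it expects the OSD layout to be expressed as an OSD service spec (drive group), which maps devices to OSDs one-to-one. A sketch of the usual all-HDDs spec (service_id and host_pattern are placeholders):

  cat > osd-hdd.yaml <<'EOF'
  service_type: osd
  service_id: one_osd_per_hdd
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
  EOF
  ceph orch apply -i osd-hdd.yaml

There is no spec option that aggregates several disks into one OSD, so anything like that has to be prepared outside of cephadm.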


We use manual deployment and Linux software RAID (level 0) with two or three disks per OSD in our larger storage boxes, partly due to the per-OSD memory/CPU core requirements.
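
Roughly, such a manual setup looks like this (a sketch; device names and the three-disk layout are just examples):

  # Build a 3-disk RAID 0 array with Linux software RAID (mdadm)
  mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

  # Create a single BlueStore OSD on top of the md device
  ceph-volume lvm create --bluestore --data /dev/md0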


Regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


