HDD <-> OSDs

Hi all,

newbie question:

The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. the 4-HDD example in https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/).

This seems odd: what if a server has a large number of disks? I was going to try CephFS on ~10 servers with 70 HDDs each. That would mean each system has to manage 70 OSDs, on 70 LVs?

Really no aggregation of the disks?
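
If I read the docs right, the per-host setup would be something like the
following (just a sketch of my understanding; the device names are made up,
but `ceph-volume lvm batch` is the documented subcommand for preparing
several disks in one go):

    # One OSD per device: ceph-volume puts an LV on each disk
    # and creates a BlueStore OSD on top of it.
    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc /dev/sdd

With 70 data disks per host, that would mean 70 such OSDs per machine.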


Regards,
Thomas
--
--------------------------------------------------------------------
Thomas Roth
Department: IT

GSI Helmholtzzentrum für Schwerionenforschung GmbH
www.gsi.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
