Re: HDD <-> OSDs


 



Hi Thomas, just a quick note: if you have only a few large OSDs, Ceph will have problems distributing the data evenly, given the number of placement groups and the number of objects per placement group, ...
I recommend reading up on the concept of placement groups.
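
As a minimal sketch (assuming a Nautilus or later release; pools and output will of course depend on your cluster), you can see how data and PGs actually spread across the OSDs with:

    # per-OSD utilisation, PG count and deviation from the cluster average
    ceph osd df tree

    # per-pool PG counts and the autoscaler's recommendations
    ceph osd pool autoscale-status

If the per-OSD PG counts are very uneven or very low, that is usually the first sign that the pg_num of the pools needs attention.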

___________________________________
Clyso GmbH - Ceph Foundation Member
support@xxxxxxxxx
https://www.clyso.com

On 22.06.2021 at 12:56, Thomas Roth wrote:
Thank you all for the clarification!

I just did not grasp the concept before, probably because I am used to systems that form a layer on top of the local file system. If Ceph does it all, down to the magnetic platter, all the better.

Cheers
Thomas

On 6/22/21 12:15 PM, Marc wrote:

That is the idea; what is wrong with this concept? Even if you aggregate the disks, you are still aggregating 70 disks, and you still have 70 disks. Everything you do that Ceph can't be aware of creates a potential misinterpretation of reality and makes Ceph act in a way it should not.
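
As a rough sketch of what that looks like in practice (device names are only examples, and this assumes the disks are empty), ceph-volume creates one LV and one OSD per raw device, e.g. on each host:

    # one OSD per disk; ceph-volume sets up the LVM volume itself
    ceph-volume lvm create --data /dev/sdb

    # or preview what a whole batch of devices would produce
    ceph-volume lvm batch --report /dev/sd{b..k}

So 70 OSDs per host is exactly what Ceph expects; CRUSH then balances the data across them.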



-----Original Message-----
Sent: Tuesday, 22 June 2021 11:55
To: ceph-users@xxxxxxx
Subject:  HDD <-> OSDs

Hi all,

newbie question:

The documentation seems to suggest that with ceph-volume, one OSD is
created for each HDD (cf. the 4-HDD example in
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)

This seems odd: what if a server has a large number of disks? I was
going to try CephFS on ~10 servers with 70 HDDs each. That would mean
each system has to deal with 70 OSDs, on 70 LVs?

Really no aggregation of the disks?


Regards,
Thomas
--
--------------------------------------------------------------------
Thomas Roth
Department: IT

GSI Helmholtzzentrum für Schwerionenforschung GmbH
www.gsi.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




