Re: HDD <-> OSDs

That is the idea; what is wrong with this concept? Even if you aggregate disks, you still have 70 disks, and something still has to deal with 70 disks.
Everything you do that Ceph cannot be aware of creates a potential misinterpretation of reality and makes Ceph act in a way it should not.
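
For what it's worth, a minimal sketch of the usual per-disk provisioning
with ceph-volume (the device names /dev/sdb etc. are placeholders, and
this assumes the default LVM backend):

    # one OSD per raw device; ceph-volume creates the LV and the
    # BlueStore OSD on it, then activates it
    ceph-volume lvm create --data /dev/sdb
    ceph-volume lvm create --data /dev/sdc

    # or prepare a whole set of devices in one go
    ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd

With 70 disks per host you simply end up with 70 such OSDs. The main
per-OSD cost is memory (roughly 4 GB per OSD with the default
osd_memory_target), so size the host RAM accordingly.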



> -----Original Message-----
> Sent: Tuesday, 22 June 2021 11:55
> To: ceph-users@xxxxxxx
> Subject:  HDD <-> OSDs
> 
> Hi all,
> 
> newbie question:
> 
> The documentation seems to suggest that with ceph-volume, one OSD is
> created for each HDD (cf. 4-HDD-example in
> https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)
> 
> This seems odd: what if a server has a large number of disks? I was
> going to try cephfs on ~10 servers with 70 HDDs each. That would mean
> each system has to deal with 70 OSDs, on 70 LVs?
> 
> Really no aggregation of the disks?
> 
> 
> Regards,
> Thomas
> --
> --------------------------------------------------------------------
> Thomas Roth
> Department: IT
> 
> GSI Helmholtzzentrum für Schwerionenforschung GmbH
> www.gsi.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



