Re: HDD <-> OSDs

On Tue, 22 Jun 2021 at 11:55, Thomas Roth <t.roth@xxxxxx> wrote:
> Hi all,
> newbie question:
> The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. the 4-HDD example in
> https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)
>
> This seems odd: what if a server has a large number of disks? I was going to try CephFS on ~10 servers with 70 HDDs each. That would mean each system
> has to deal with 70 OSDs, on 70 LVs?
> Really no aggregation of the disks?

There is nothing inherently wrong with having 70 OSDs if you have 70
drives. As others have said, it has the benefit of keeping a single
fault confined to as small a domain as possible, and if you read up on
ZFS guides, I am sure they will say the same thing: hand each separate
raw device over to zfs and let zfs place the data in the optimal way
for each raw device, instead of lumping the lot together and
pretending it is one large disk when it actually isn't.
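
If it helps to make that concrete, here is a rough sketch of doing it
by hand with ceph-volume (untested; it assumes the drives are already
wiped and unused, that ceph.conf and the bootstrap-osd keyring are in
place on the host, and that the device names below are placeholders
for your own):

    #!/usr/bin/env python3
    # Sketch: create one bluestore OSD per raw device via ceph-volume.
    # Run as root on each OSD host.
    import subprocess

    # Placeholder device list -- substitute the ~70 drives on your host,
    # ideally via /dev/disk/by-id paths so names survive reboots.
    DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

    for dev in DEVICES:
        # 'ceph-volume lvm create' wraps prepare+activate: it puts an LV
        # on the device, formats it as a bluestore OSD and starts the OSD.
        subprocess.run(["ceph-volume", "lvm", "create", "--data", dev],
                       check=True)

In practice "ceph-volume lvm batch" with the whole device list, or an
orchestrator such as cephadm or ceph-ansible, will do the same thing
for you; the point is simply one OSD (and one LV) per raw drive.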

With one huge 70-disk RAID0 you might end up with a lot of reads and
writes hitting a small number of disks at a time, which will probably
be worse for your overall performance.

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


