Re: Single disk per OSD ?

On 17-12-01 12:23 PM, Maged Mokhtar wrote:
Hi all,

I believe most existing setups use one disk per OSD. Is this going to remain the most common setup in the future? With the move to LVM, will multiple disks per OSD become the preferred layout? On the other hand, I also see NVMe vendors recommending multiple OSDs (2-4) per disk, as disks are getting too fast for a single OSD process to saturate.

Can anyone shed some light on this or offer recommendations, please?

You don't put more than one OSD on a spinning disk because access times will kill your performance - they already do, and asking HDDs to do double/triple/quadruple/... duty will only make things far worse. SSDs, on the other hand, have access times so short that they're most often bottlenecked by their users rather than by the SSD itself, so it makes perfect sense to put 2-4 OSDs on one SSD.

LVM isn't going to change that pattern much. It may make it easier to set up RAID0-backed HDD OSDs, but that's a questionable use case, and OSDs with JBODs under them are counterproductive: a single disk failure would still be caught by Ceph, but replacing failed drives becomes more difficult -- plus, JBOD-backed OSDs significantly extend the damage area once such an OSD fails.
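For what it's worth, newer ceph-volume releases can carve one fast device into several LVM-backed OSDs in a single command. A minimal sketch, assuming a ceph-volume version that supports "lvm batch" and an NVMe device at /dev/nvme0n1 (the device path is a placeholder; check your release before relying on this):

    # Preview the layout without touching the disk (sketch, not verified
    # against your exact Ceph release; /dev/nvme0n1 is a placeholder):
    ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme0n1

    # Actually create two LVM-backed OSDs on the one NVMe device:
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

The --report run is worth doing first, since batch mode is destructive once it starts provisioning.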

--
Piotr Dałek
piotr.dalek@xxxxxxxxxxxx
https://www.ovh.com/us/