Dimitri Maziuk writes:
>>>> 1) I read somewhere that it is recommended to have one OSD per disk in a
>>>> production environment.
>>>> Is this also the maximum of disks per OSD, or could I use multiple disks
>>>> per OSD? And why?
>>>
>>> You could use multiple disks for one OSD if you used some striping and
>>> abstracted the disks (LVM, MD RAID, etc.), but it wouldn't make sense. One
>>> OSD writes into one filesystem, which is usually one disk in a production
>>> environment. Using RAID underneath would not drastically increase either
>>> reliability or performance.
>>
>> I see some sense in RAID 0: a single ceph-osd daemon per node (though still
>> keeping disk-per-OSD otherwise). If you have relatively few [planned] cores
>> per task on a node, you can consider it.
>
> RAID 0: a single disk failure kills the entire filesystem, takes the OSD
> offline and triggers a cluster-wide resync. Actual RAID: a single disk
> failure does not affect the cluster in any way.

Usually data is distributed per host, so a whole-array failure only causes a
longer cluster resync, but nothing new cluster-wide.

--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
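
Since the argument above rests on the failure domain, here is a minimal Python
sketch (a toy model of my own, not Ceph's actual CRUSH algorithm; the host and
OSD names are invented) of what "data distributed per host" buys you: when
every PG keeps its replicas on different hosts, losing an entire host, e.g. its
whole RAID-0 array, only degrades PGs and triggers recovery; it never leaves a
PG without a surviving replica.

import random

# Invented toy topology: 3 hosts, 2 OSDs each, 2 replicas per PG.
HOSTS = {
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
    "host-c": ["osd.4", "osd.5"],
}
REPLICAS = 2
NUM_PGS = 32

def place_pg(pg_id, hosts, replicas):
    # Pick `replicas` OSDs for one PG, each on a different host
    # (mimicking a host-level failure domain).
    rng = random.Random(pg_id)  # deterministic per PG, stand-in for a hash
    chosen_hosts = rng.sample(sorted(hosts), replicas)
    return [rng.choice(hosts[h]) for h in chosen_hosts]

placement = {pg: place_pg(pg, HOSTS, REPLICAS) for pg in range(NUM_PGS)}

# Simulate a whole host going away, e.g. its RAID-0 array dying.
failed_host = "host-a"
failed_osds = set(HOSTS[failed_host])

degraded = [pg for pg, osds in placement.items()
            if any(o in failed_osds for o in osds)]
lost = [pg for pg, osds in placement.items()
        if all(o in failed_osds for o in osds)]

print("PGs that need recovery:", len(degraded))
print("PGs with no surviving replica:", len(lost))  # always 0: replicas never share a host

With one OSD per plain disk, a single disk failure degrades fewer PGs than
losing a whole RAID-0 array, but in both cases the cluster just re-replicates
and carries on; that is the trade-off discussed above.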