Re: some newbie questions...

On Sat, 2013-08-31 at 13:34 -0500, Dimitri Maziuk wrote:
> On 2013-08-31 11:36, Dzianis Kahanovich wrote:
> > Johannes Klarenbeek wrote:
> >
> >>>
> >>> 1) I read somewhere that it is recommended to have one OSD per disk in a production environment.
> >>>     Is this also the maximum number of disks per OSD, or could I use multiple disks per OSD? And why?
> >>
> >> You could use multiple disks for one OSD if you used some striping layer to abstract the disks (like LVM, MD RAID, etc.), but it wouldn't make sense. One OSD writes into one filesystem, which is usually one disk in a production environment. Using RAID underneath would not drastically increase either reliability or performance.
> >
> > I see some sense in RAID 0: a single ceph-osd daemon per node (instead of
> > one disk per OSD). If you have relatively few [planned] cores per task on
> > a node, you can think about it.
> 
> RAID-0: a single disk failure kills the entire filesystem, off-lines the 
> OSD and triggers a cluster-wide resync. Actual RAID (with redundancy): a 
> single disk failure does not affect the cluster in any way.
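For illustration, the striped single-OSD setup the quoted posts describe could be sketched roughly like this (the device names, md array, and OSD mount path are assumptions for the example, not from the thread):

```shell
# Sketch only: stripe two disks into one block device and put a single OSD
# filesystem on it. /dev/sdb, /dev/sdc, /dev/md0 and the osd path are
# hypothetical example names.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /var/lib/ceph/osd/ceph-0
# The OSD now sees one large filesystem. As noted above, with RAID-0 a
# failure of either underlying disk destroys the whole filesystem, and
# the OSD along with it.
```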

RAID controllers also add a lot of manageability to the mix.  The fact
that a chassis starts beeping and indicates exactly which disk needs
replacing, and that the controller manages the automatic rebuild after
replacement, makes operations much easier, even for less technical
personnel.  Also, if you have fast disks and a good RAID controller, it
should offload the entire rebuild process from the node's main CPU,
without a performance hit on the Ceph cluster or the node.  As already
said, OSDs are expensive on resources, too.  Having too many of them on
one node, and then having the entire node fail, can cause a lot of
traffic and load on the remaining nodes while things rebalance.
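To put a rough number on that rebalance load, here is a back-of-the-envelope sketch; the node count, OSDs per node, and disk size are made-up figures for illustration only:

```shell
# Hypothetical cluster: 6 nodes, 8 OSDs per node, 4 TB per OSD.
nodes=6
osds_per_node=8
tb_per_osd=4

# Losing one whole node means all the data it held must be re-replicated
# from the surviving copies onto the remaining nodes.
lost_tb=$((osds_per_node * tb_per_osd))
echo "Data to re-replicate: ${lost_tb} TB"
echo "Approx. extra data per surviving node: $((lost_tb / (nodes - 1))) TB"
```

With these numbers, losing one node puts 32 TB back in flight, spread over the five survivors, which is why packing many OSDs onto few large nodes makes node failure expensive.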


   Regards,

      Oliver

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
