Re: OSD auto weights

On Sat, 3 Dec 2011, Noah Watkins wrote:
> We have a cluster with disks that have large differences in performance. I
> noticed that wiki page on OSD auto weights
> (http://ceph.newdream.net/wiki/Osd_auto_weight), but it seems to have been
> last updated in 2008. Are there any developments in this area or recommended
> techniques for setting up weights?

It looks like the old code is mostly usable.  The basic idea is that the 
OSD provides the monitor with a weight when it joins the cluster.  On 
mkfs, ceph-osd times how long it takes to write 1 GB and calculates a 
weight based on that.  

The real problem is that the weights aren't normalized.  The calculation 
should be parameterized so that you can specify the expected performance 
that maps to 1.0 and have anything slower weighted proportionally 
smaller.  (Currently it adjusts the osdmap weight, which is in [0,1].)  
Alternatively, it could adjust the crush weight, which will probably 
behave better in the long run.
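As a rough sketch of that parameterization (the 100 MB/s baseline and the 
helper function are assumptions for illustration, not anything in the 
current code):

```shell
#!/bin/sh
# Sketch: normalize a measured write throughput against an expected
# baseline, so the baseline maps to weight 1.0 and slower disks get a
# proportionally smaller weight, clamped to the osdmap's [0,1] range.
EXPECTED_MBPS=100    # assumed "full speed" throughput that should map to 1.0

# compute_weight MEASURED_MBPS -> weight in [0,1]
compute_weight() {
    awk -v m="$1" -v e="$EXPECTED_MBPS" 'BEGIN {
        w = m / e
        if (w > 1) w = 1    # faster than expected still caps at 1.0
        printf "%.2f\n", w
    }'
}

compute_weight 50     # half the expected speed -> 0.50
compute_weight 140    # faster than expected    -> 1.00
```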

If you just need to kludge something together to make this work now, I'd 
adjust the current code.  Long term, this should go in the osd addition 
code that will eventually be triggered (by chef or udev or an admin) when 
a new osd is added.  Then the process would look like:

 - identify the device(s) for the osd data, journal
 - benchmark it
 - allocate a new osd id (ceph osd create)
 - ceph-osd --mkfs
 - add osd to crush map with appropriate weight (based on benchmark results)
 - start ceph-osd
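The steps above can be sketched as a script.  Everything here is a 
hypothetical illustration: the hostname bucket, the stand-in benchmark 
result, and the crush-placement syntax are assumptions (the exact CLI 
varies by version), and by default it only prints the commands it would 
run.

```shell
#!/bin/sh
# Hypothetical provisioning sketch following the steps above.
# With DRY_RUN=1 (the default) commands are printed, not executed.
set -e
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

HOST=myhost            # assumed crush host bucket
MBPS=55                # stand-in for a measured 1 GB write benchmark result
EXPECTED_MBPS=100      # assumed throughput that should map to weight 1.0

# Normalize the benchmark result into a weight in [0,1].
WEIGHT=$(awk -v m="$MBPS" -v e="$EXPECTED_MBPS" \
    'BEGIN { w = m / e; if (w > 1) w = 1; printf "%.2f", w }')

OSD_ID=0               # in a real run: OSD_ID=$(ceph osd create)
run ceph osd create
run ceph-osd -i "$OSD_ID" --mkfs
# Crush placement syntax is illustrative and version-dependent.
run ceph osd crush set "$OSD_ID" "osd.$OSD_ID" "$WEIGHT" host="$HOST"
run ceph-osd -i "$OSD_ID"
```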

sage