Re: CRUSH algorithm and its debug method

On Mon, 3 Oct 2016, Sage Weil wrote:
> > 2. Currently, the reweight param in the crushmap is memoryless (e.g. we
> > balance our data by reducing reweight, which is lost after the
> > osd goes DOWN and OUT automatically and we mark it IN again, because
> > ceph osd in currently sets the reweight directly to 1.0 and out sets
> > the reweight to 0.0).  It is quite awkward when we use ceph osd
> > reweight-by-utilization to balance data (if some osds go down and
> > out, our previous effort is totally lost).  So I think marking an osd
> > "in" should not simply set reweight to "1.0". Could we instead
> > iterate over the previous osdmaps to find the old reweight value,
> > or record it somewhere we can retrieve it again?
> 
> The old value is stored here
> 
> https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.h#L89
> 
> and restored when the OSD is marked back up, although IIRC there is a 
> config option that controls when the old value is stored (it might only 
> happen when the osd is marked out automatically, not when it is done 
> manually?).  That behavior could be changed, though.

...and I just found this sitting on my todo list.  Here is a fix:

	https://github.com/ceph/ceph/pull/11293
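
As a rough illustration of the store/restore behavior discussed above, here is a
minimal sketch (hypothetical class and method names, not Ceph's actual code):
the old reweight is saved when an OSD is marked out automatically, and restored
when it is marked back in instead of resetting to 1.0.

```python
# Hypothetical sketch of the save/restore-reweight behavior described
# in this thread; names and structure are illustrative, not Ceph code.

class OSDMapSketch:
    IN = 0x10000   # full weight, in Ceph's 16.16 fixed-point convention
    OUT = 0

    def __init__(self, num_osds):
        self.weight = [self.IN] * num_osds   # current reweight per osd
        self.old_weight = {}                 # osd id -> saved reweight

    def mark_out(self, osd, automatic=True):
        # Save the previous value only on automatic out, mirroring the
        # config-gated behavior mentioned above (an assumption here).
        if automatic and self.weight[osd] > self.OUT:
            self.old_weight[osd] = self.weight[osd]
        self.weight[osd] = self.OUT

    def mark_in(self, osd):
        # Restore the saved reweight if one exists; otherwise fall
        # back to full weight (1.0).
        self.weight[osd] = self.old_weight.pop(osd, self.IN)


if __name__ == "__main__":
    m = OSDMapSketch(3)
    m.weight[1] = 0x8000      # reweight 0.5, e.g. set by reweight-by-utilization
    m.mark_out(1)             # automatic out: old value is remembered
    m.mark_in(1)
    print(hex(m.weight[1]))   # restored to 0x8000, not reset to 1.0
```

With this scheme a manual out (automatic=False) would not save the old value,
so a subsequent mark-in falls back to 1.0, matching the caveat in the reply.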

sage
--