Re: ceph recovery policy

Hi,

On Thu, 28 May 2015, Ugis wrote:
> Hi!
> 
> I have been watching the "ceph -s" output for a while and noticed
> that in this line:
>  3324/7888981 objects degraded (0.042%); 1995972/7888981 objects
> misplaced (25.301%)
> the misplaced object count drops steadily, while the degraded object
> count drops only occasionally.
> 
> Quick googling did not reveal any mention of a recovery/rebalancing
> policy (except client vs. recovery I/O).
> Is there one?
> I mean - it would be reasonable to recover degraded objects first,
> and only then rebalance misplaced objects, as that would reduce the
> risk of losing data.
> 
> That could be the default recovery policy, and if another policy is
> needed, it could be made specifiable via a recovery variable, like
> those in the recovery section here:
> http://dachary.org/loic/ceph-doc/rados/configuration/osd-config-ref/
> 
> P.S. There is no active client I/O at the moment.
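
(For reference, the percentages in that status line are just each count
divided by the total object count; a quick check in Python:)

    # Reproduce the percentages from the "ceph -s" line quoted above.
    total = 7888981
    degraded = 3324
    misplaced = 1995972

    print(f"degraded:  {100 * degraded / total:.3f}%")   # 0.042%
    print(f"misplaced: {100 * misplaced / total:.3f}%")  # 25.301%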

If recovery and rebalancing ever contend for the same node, recovery
is prioritized.  What is probably happening in your case is that the
rebalancing is happening elsewhere in the cluster, on nodes where
nothing is degraded...

sage
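
A minimal sketch of the scheduling behavior described above (an
illustration only, not Ceph's actual OSD code; the node and pg names
are made up): each node drains its own work queue, degraded-object
recovery sorts ahead of misplaced-object rebalancing, so rebalancing
only waits where the two contend on the same node.

    # Toy model: recovery (degraded) outranks rebalancing (misplaced),
    # but only where both kinds of work land on the same node.
    import heapq

    RECOVERY, REBALANCE = 0, 1  # lower value = higher priority

    def run_node(node, ops):
        """Drain one node's queue in priority order."""
        queue = [(prio, i, desc) for i, (prio, desc) in enumerate(ops)]
        heapq.heapify(queue)
        while queue:
            _, _, desc = heapq.heappop(queue)
            print(f"{node}: {desc}")

    # osd.0 has both kinds of work, so its recovery goes first.
    run_node("osd.0", [
        (REBALANCE, "move misplaced pg 1.a"),
        (RECOVERY,  "recover degraded pg 1.b"),
    ])

    # osd.1 has nothing degraded, so its rebalancing proceeds at once --
    # which is why the misplaced count can fall steadily while the
    # degraded count barely moves.
    run_node("osd.1", [
        (REBALANCE, "move misplaced pg 2.c"),
        (REBALANCE, "move misplaced pg 2.d"),
    ])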