Re: OSD recovery priority ?

On Wed, 24 Jan 2018, Piotr Dałek wrote:
> On 18-01-24 10:54 AM, Vincent Godin wrote:
> > I have a cluster in a recovery state and I notice that a few PGs are
> > held by only one OSD, while a lot are held by two OSDs (pool
> > configured with size=3 & min_size=1). After a few days, I'm surprised
> > to see that the few PGs held by only one OSD are still there. It seems
> > there is no higher priority for PGs in a critical state (1 of 3)
> > compared to those in a warning state (2 of 3). Am I wrong, or could
> > this be a near-future feature?
> > My cluster is on Jewel 10.2.6.
> 
> http://docs.ceph.com/docs/master/release-notes/#v10-2-7-jewel
> https://github.com/ceph/ceph/pull/13232 might be closest to what you need.
> You'd still need to bump up min_size to 2, though.
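For reference, a sketch of the relevant commands (pool name is a placeholder; `dump_stuck undersized` availability may vary by release):

```shell
# Raise min_size so PGs with fewer than 2 replicas stop serving I/O
# until recovery restores a second copy:
ceph osd pool set <pool> min_size 2

# Inspect which PGs are still short of replicas:
ceph pg dump_stuck undersized
ceph health detail
```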

Note that there is also an issue in luminous: when a lower-priority 
recovery is already in progress and a higher-priority PG becomes ready 
to recover, the low-priority recovery is not preempted. This is fixed in 
master for mimic, but the backport is nontrivial.
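To illustrate the effect described above, here is a toy, self-contained Python sketch (all names and numbers invented, not Ceph code) of a non-preemptive scheduler: once a low-priority recovery occupies the only slot, a critical PG arriving later must wait for it to finish, no matter how urgent it is.

```python
import heapq

def recovery_order(jobs, slots=1):
    """Simulate non-preemptive recovery scheduling.

    jobs: list of (arrival, priority, name, duration) tuples; a higher
    priority number means more urgent. Returns PG names in the order
    their recovery *starts*. Running jobs are never preempted.
    """
    jobs = sorted(jobs)          # process arrivals in time order
    ready = []                   # max-heap of waiting jobs (priority negated)
    running = []                 # min-heap of finish times
    started = []
    free, t, i = slots, 0.0, 0
    while i < len(jobs) or ready or running:
        # Admit every job that has arrived by the current time.
        while i < len(jobs) and jobs[i][0] <= t:
            arrival, prio, name, dur = jobs[i]
            heapq.heappush(ready, (-prio, arrival, name, dur))
            i += 1
        # Fill free slots with the highest-priority waiting jobs.
        while free and ready:
            _, _, name, dur = heapq.heappop(ready)
            started.append(name)
            heapq.heappush(running, t + dur)
            free -= 1
        # Jump to the next event: an arrival or a completion.
        upcoming = []
        if i < len(jobs):
            upcoming.append(jobs[i][0])
        if running:
            upcoming.append(running[0])
        if not upcoming:
            break
        t = max(t, min(upcoming))
        # Release slots for recoveries that have finished.
        while running and running[0] <= t:
            heapq.heappop(running)
            free += 1
    return started

# A low-priority recovery grabs the single slot at t=0; the critical PG
# arriving at t=1 cannot start until t=10, despite its higher priority.
print(recovery_order([(0, 1, "pg-low", 10), (1, 100, "pg-critical", 5)]))
```

With the fix Sage mentions, a preemptive scheduler would pause `pg-low` at t=1 and start `pg-critical` immediately.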

sage
