Re: New OSD with weight 0, rebalance still happen...

Hello, Paweł Sadowski!
  In that day's mail, you wrote...

> This is most probably due to the big difference in weights between your hosts
> (the new one has a 20x lower weight than the old ones), which in combination
> with the straw algorithm is a 'known' issue.

OK. I've reweighted that disk back to '1' and the status went back to
HEALTH_OK.
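(Concretely that's just a single reweight back; the osd id below is only a
placeholder, and which form applies depends on which of the two weights had
been set to 0:)

  ceph osd crush reweight osd.12 1     # CRUSH weight
  ceph osd reweight 12 1               # temporary override weight (0..1)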


> You could try to increase choose_total_tries in your crush map from 50 to
> some bigger number. The best option, IMO, would be to use straw2 (which will
> cause some rebalancing) and then use 'ceph osd crush reweight' (instead of
> 'ceph osd reweight') in small steps to slowly rebalance data onto the new
> OSDs.
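
If I read that right, in terms of commands it would be roughly the
following; this is only a sketch, and the edited values (100, straw2) are
examples rather than recommendations:

  # dump and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, raise the tunable, e.g.
  #   tunable choose_total_tries 50  ->  tunable choose_total_tries 100
  # and/or change 'alg straw' to 'alg straw2' in the bucket definitions
  # (straw2 needs hammer or newer everywhere, if I'm not mistaken)

  # recompile and inject the map back; the straw2 switch will move some data
  crushtool -c crushmap.txt -o crushmap.new.bin
  ceph osd setcrushmap -i crushmap.new.bin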

For now I'm bringing the new disks in with 'ceph osd reweight'; once I'm at
about 50% of the new disks, I'll probably start using 'ceph osd crush reweight'
against the old ones.
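
Spelled out, the plan is more or less the following; the osd ids, weights
and step sizes below are just placeholders:

  # new disks: start with a low override weight and raise it in small steps
  ceph osd reweight 42 0.2
  ceph osd reweight 42 0.5
  ceph osd reweight 42 1.0

  # later, move the permanent CRUSH weights the same way, in small steps
  ceph osd crush reweight osd.3 0.8      # e.g. nudging an old OSD down
  ceph osd crush reweight osd.42 1.0     # or a new one up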

Thanks.

-- 
dott. Marco Gaiarin				        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

		Donate your 5 PER MILLE to LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
	(tax code 00307430132, category ONLUS or RICERCA SANITARIA)



