Re: Degraded objects after: ceph osd in $osd

I'm replying to myself.

> I've added a new node and slowly added 4 new OSDs, but in the meantime
> an OSD (not one of the new ones, and not on the node being removed)
> died. My situation now is:
>  root@blackpanther:~# ceph osd df tree
>  ID WEIGHT   REWEIGHT SIZE   USE   AVAIL  %USE  VAR  TYPE NAME               
>  -1 21.41985        -  5586G 2511G  3074G     0    0 root default            
>  -2  5.45996        -  5586G 2371G  3214G 42.45 0.93     host capitanamerica 
>   0  1.81999  1.00000  1862G  739G  1122G 39.70 0.87         osd.0           
>   1  1.81999  1.00000  1862G  856G  1005G 46.00 1.00         osd.1           
>  10  0.90999  1.00000   931G  381G   549G 40.95 0.89         osd.10          
>  11  0.90999  1.00000   931G  394G   536G 42.35 0.92         osd.11          
>  -3  5.03996        -  5586G 2615G  2970G 46.82 1.02     host vedovanera     
>   2  1.39999  1.00000  1862G  684G  1177G 36.78 0.80         osd.2           
>   3  1.81999  1.00000  1862G 1081G   780G 58.08 1.27         osd.3           
>   4  0.90999  1.00000   931G  412G   518G 44.34 0.97         osd.4           
>   5  0.90999  1.00000   931G  436G   494G 46.86 1.02         osd.5           
>  -4  5.45996        -   931G  583G   347G     0    0     host deadpool       
>   6  1.81999  1.00000  1862G  898G   963G 48.26 1.05         osd.6           
>   7  1.81999  1.00000  1862G  839G  1022G 45.07 0.98         osd.7           
>   8  0.90999        0      0     0      0     0    0         osd.8           
>   9  0.90999  1.00000   931G  583G   347G 62.64 1.37         osd.9           
>  -5  5.45996        -  5586G 2511G  3074G 44.96 0.98     host blackpanther   
>  12  1.81999  1.00000  1862G  828G  1033G 44.51 0.97         osd.12          
>  13  1.81999  1.00000  1862G  753G  1108G 40.47 0.88         osd.13          
>  14  0.90999  1.00000   931G  382G   548G 41.11 0.90         osd.14          
>  15  0.90999  1.00000   931G  546G   384G 58.66 1.28         osd.15          
>                 TOTAL 21413G 9819G 11594G 45.85                              
>  MIN/MAX VAR: 0/1.37  STDDEV: 7.37
> 
> Perfectly healthy. But I've tried to slowly remove an OSD from
> 'vedovanera', so I ran:
> 	ceph osd crush reweight osd.2 <weight>
> As you can see, I've gotten down to weight 1.4 (from 1.81999), but if I
> go any lower than that I get:
[...]
>             recovery 2/2556513 objects degraded (0.000%)
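
For reference, the gradual drain I was doing is roughly the sketch below.
Only 'ceph osd crush reweight' itself is from my actual procedure; the step
values and the health polling are arbitrary choices, nothing prescribed:

	#!/bin/sh
	# Sketch only: lower osd.2's CRUSH weight in small steps,
	# letting the cluster settle before each further step.
	OSD=osd.2
	for W in 1.6 1.4 1.2 1.0 0.8 0.6 0.4 0.2 0; do
	    ceph osd crush reweight "$OSD" "$W"
	    # block until recovery has finished
	    while ! ceph health | grep -q HEALTH_OK; do
	        sleep 60
	    done
	done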

It seems the trouble came from osd.8, which was down and out but had not
been removed from the CRUSH map (it still had weight 0.90999).
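
A quick way to spot this kind of leftover (just a sketch using standard
commands; the grep simply filters the same output shown above):

	# down OSDs that still carry CRUSH weight show up here
	ceph osd tree | grep -w down
	# the authoritative view of the weights is the CRUSH map itself
	ceph osd crush dump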

After removing osd.8, a massive rebalance started. Once that finished, I
could lower the weight of the OSDs on node 'vedovanera' with no more
degraded objects.
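
For completeness, "removing osd.8" amounts to the standard dead-OSD removal
sequence from the Ceph docs (a sketch, not necessarily verbatim what I
typed; adjust the id):

	# drop the OSD from the CRUSH map, so data is no longer mapped to it
	ceph osd crush remove osd.8
	# delete its authentication key
	ceph auth del osd.8
	# finally remove the OSD id from the cluster
	ceph osd rm osd.8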

I think I'm starting to understand how the CRUSH algorithm concretely
works. ;-)

-- 
dott. Marco Gaiarin				        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

		Donate your 5 PER MILLE to LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
	(tax code 00307430132, category ONLUS or RICERCA SANITARIA)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



