Re: Ceph space problem, garbage collector ?

And, for more visibility (I hope :D), here is the osd tree:


# id    weight  type name       up/down reweight
-8      11.65   root SSDroot
-33     5.8             datacenter SSDrbx1
-32     5.8                     room SSDs01
-31     5.8                             net SSD188-165-15
-30     5.8                                     rack SSD01B04
-29     5.8                                             host skullface
50      0.9                                                     osd.50  up      1       
51      0.85                                                    osd.51  up      1       
52      1.05                                                    osd.52  up      1       
53      1                                                       osd.53  up      1       
54      1                                                       osd.54  up      1       
55      1                                                       osd.55  up      1       
-27     5.85            datacenter SSDrbx2
-34     5.85                    room SSDs31
-35     5.85                            net SSD5-135-134
-36     5.85                                    rack SSD31B22
-37     5.85                                            host myra
56      1.1                                                     osd.56  up      1       
57      1.1                                                     osd.57  up      1       
58      1                                                       osd.58  up      1       
59      0.9                                                     osd.59  up      1       
60      0.9                                                     osd.60  up      1       
61      0.85                                                    osd.61  up      1       
-1      73.44   root SASroot
-100    24.48           datacenter SASrbx1
-90     24.48                   room SASs15
-72     24.48                           net SAS188-165-15
-40     24.48                                   rack SAS15B01
-17     24.48                                           host dragan
70      2.72                                                    osd.70  up      1       
71      2.72                                                    osd.71  up      1       
72      2.72                                                    osd.72  up      1       
73      2.72                                                    osd.73  up      1       
74      2.72                                                    osd.74  up      1       
75      2.72                                                    osd.75  up      1       
76      2.72                                                    osd.76  up      1       
77      2.72                                                    osd.77  up      1       
78      2.72                                                    osd.78  up      1       
-101    48.96           datacenter SASrbx2
-13     24.48                   room SASs31
-14     24.48                           net SAS178-33-62
-15     24.48                                   rack SAS31A10
-16     24.48                                           host taman
49      2.72                                                    osd.49  up      1       
62      2.72                                                    osd.62  up      1       
63      2.72                                                    osd.63  up      1       
64      2.72                                                    osd.64  up      0       
65      2.72                                                    osd.65  down    0       
66      2.72                                                    osd.66  up      1       
67      2.72                                                    osd.67  up      1       
68      2.72                                                    osd.68  up      1       
69      2.72                                                    osd.69  up      1       
-12     24.48                   room SASs34
-11     24.48                           net SAS5-135-135
-10     24.48                                   rack SAS34A14
-9      24.48                                           host kaino
40      2.72                                                    osd.40  up      1       
41      2.72                                                    osd.41  up      1       
42      2.72                                                    osd.42  up      1       
43      2.72                                                    osd.43  up      1       
44      2.72                                                    osd.44  up      1       
45      2.72                                                    osd.45  up      1       
46      2.72                                                    osd.46  up      1       
47      2.72                                                    osd.47  up      1       
48      2.72                                                    osd.48  up      1       
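
For reference, the listing above is the output of "ceph osd tree". Note
that osd.65 is down and osd.64 is reweighted to 0, so neither of them
currently takes data. Since the thread is about disk space, per-OSD usage
can be cross-checked against these weights with something like the
following (a sketch; the dump output format varies a bit between
releases):

ceph osd tree
ceph pg dump osds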

On Tuesday 10 September 2013 at 21:01 +0200, Olivier Bonvalet wrote:
> I removed some garbage about the hosts faude / rurkh / murmillia (they were
> temporarily added because the cluster was full). So here is the "clean" CRUSH map:
> 
> 
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> 
> # devices
> device 0 device0
> device 1 device1
> device 2 device2
> device 3 device3
> device 4 device4
> device 5 device5
> device 6 device6
> device 7 device7
> device 8 device8
> device 9 device9
> device 10 device10
> device 11 device11
> device 12 device12
> device 13 device13
> device 14 device14
> device 15 device15
> device 16 device16
> device 17 device17
> device 18 device18
> device 19 device19
> device 20 device20
> device 21 device21
> device 22 device22
> device 23 device23
> device 24 device24
> device 25 device25
> device 26 device26
> device 27 device27
> device 28 device28
> device 29 device29
> device 30 device30
> device 31 device31
> device 32 device32
> device 33 device33
> device 34 device34
> device 35 device35
> device 36 device36
> device 37 device37
> device 38 device38
> device 39 device39
> device 40 osd.40
> device 41 osd.41
> device 42 osd.42
> device 43 osd.43
> device 44 osd.44
> device 45 osd.45
> device 46 osd.46
> device 47 osd.47
> device 48 osd.48
> device 49 osd.49
> device 50 osd.50
> device 51 osd.51
> device 52 osd.52
> device 53 osd.53
> device 54 osd.54
> device 55 osd.55
> device 56 osd.56
> device 57 osd.57
> device 58 osd.58
> device 59 osd.59
> device 60 osd.60
> device 61 osd.61
> device 62 osd.62
> device 63 osd.63
> device 64 osd.64
> device 65 osd.65
> device 66 osd.66
> device 67 osd.67
> device 68 osd.68
> device 69 osd.69
> device 70 osd.70
> device 71 osd.71
> device 72 osd.72
> device 73 osd.73
> device 74 osd.74
> device 75 osd.75
> device 76 osd.76
> device 77 osd.77
> device 78 osd.78
> 
> # types
> type 0 osd
> type 1 host
> type 2 rack
> type 3 net
> type 4 room
> type 5 datacenter
> type 6 root
> 
> # buckets
> host dragan {
> 	id -17		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.70 weight 2.720
> 	item osd.71 weight 2.720
> 	item osd.72 weight 2.720
> 	item osd.73 weight 2.720
> 	item osd.74 weight 2.720
> 	item osd.75 weight 2.720
> 	item osd.76 weight 2.720
> 	item osd.77 weight 2.720
> 	item osd.78 weight 2.720
> }
> rack SAS15B01 {
> 	id -40		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item dragan weight 24.480
> }
> net SAS188-165-15 {
> 	id -72		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SAS15B01 weight 24.480
> }
> room SASs15 {
> 	id -90		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SAS188-165-15 weight 24.480
> }
> datacenter SASrbx1 {
> 	id -100		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SASs15 weight 24.480
> }
> host taman {
> 	id -16		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.49 weight 2.720
> 	item osd.62 weight 2.720
> 	item osd.63 weight 2.720
> 	item osd.64 weight 2.720
> 	item osd.65 weight 2.720
> 	item osd.66 weight 2.720
> 	item osd.67 weight 2.720
> 	item osd.68 weight 2.720
> 	item osd.69 weight 2.720
> }
> rack SAS31A10 {
> 	id -15		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item taman weight 24.480
> }
> net SAS178-33-62 {
> 	id -14		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SAS31A10 weight 24.480
> }
> room SASs31 {
> 	id -13		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SAS178-33-62 weight 24.480
> }
> host kaino {
> 	id -9		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.40 weight 2.720
> 	item osd.41 weight 2.720
> 	item osd.42 weight 2.720
> 	item osd.43 weight 2.720
> 	item osd.44 weight 2.720
> 	item osd.45 weight 2.720
> 	item osd.46 weight 2.720
> 	item osd.47 weight 2.720
> 	item osd.48 weight 2.720
> }
> rack SAS34A14 {
> 	id -10		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item kaino weight 24.480
> }
> net SAS5-135-135 {
> 	id -11		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SAS34A14 weight 24.480
> }
> room SASs34 {
> 	id -12		# do not change unnecessarily
> 	# weight 24.480
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SAS5-135-135 weight 24.480
> }
> datacenter SASrbx2 {
> 	id -101		# do not change unnecessarily
> 	# weight 48.960
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SASs31 weight 24.480
> 	item SASs34 weight 24.480
> }
> root SASroot {
> 	id -1		# do not change unnecessarily
> 	# weight 73.440
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SASrbx1 weight 24.480
> 	item SASrbx2 weight 48.960
> }
> host skullface {
> 	id -29		# do not change unnecessarily
> 	# weight 5.800
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.50 weight 0.900
> 	item osd.51 weight 0.850
> 	item osd.52 weight 1.050
> 	item osd.53 weight 1.000
> 	item osd.54 weight 1.000
> 	item osd.55 weight 1.000
> }
> rack SSD01B04 {
> 	id -30		# do not change unnecessarily
> 	# weight 5.800
> 	alg straw
> 	hash 0	# rjenkins1
> 	item skullface weight 5.800
> }
> net SSD188-165-15 {
> 	id -31		# do not change unnecessarily
> 	# weight 5.800
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSD01B04 weight 5.800
> }
> room SSDs01 {
> 	id -32		# do not change unnecessarily
> 	# weight 5.800
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSD188-165-15 weight 5.800
> }
> datacenter SSDrbx1 {
> 	id -33		# do not change unnecessarily
> 	# weight 5.800
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSDs01 weight 5.800
> }
> host myra {
> 	id -37		# do not change unnecessarily
> 	# weight 5.850
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.56 weight 1.100
> 	item osd.57 weight 1.100
> 	item osd.58 weight 1.000
> 	item osd.59 weight 0.900
> 	item osd.60 weight 0.900
> 	item osd.61 weight 0.850
> }
> rack SSD31B22 {
> 	id -36		# do not change unnecessarily
> 	# weight 5.850
> 	alg straw
> 	hash 0	# rjenkins1
> 	item myra weight 5.850
> }
> net SSD5-135-134 {
> 	id -35		# do not change unnecessarily
> 	# weight 5.850
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSD31B22 weight 5.850
> }
> room SSDs31 {
> 	id -34		# do not change unnecessarily
> 	# weight 5.850
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSD5-135-134 weight 5.850
> }
> datacenter SSDrbx2 {
> 	id -27		# do not change unnecessarily
> 	# weight 5.850
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSDs31 weight 5.850
> }
> root SSDroot {
> 	id -8		# do not change unnecessarily
> 	# weight 11.650
> 	alg straw
> 	hash 0	# rjenkins1
> 	item SSDrbx1 weight 5.800
> 	item SSDrbx2 weight 5.850
> }
> 
> # rules
> rule data {
> 	ruleset 0
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SASroot
> 	step chooseleaf firstn 0 type net
> 	step emit
> }
> rule metadata {
> 	ruleset 1
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SASroot
> 	step chooseleaf firstn 0 type host
> 	step emit
> }
> rule rbd {
> 	ruleset 2
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SASroot
> 	step chooseleaf firstn 0 type net
> 	step emit
> }
> rule SSDperOSD {
> 	ruleset 3
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SSDroot
> 	step choose firstn 0 type osd
> 	step emit
> }
> rule SSDperNetwork {
> 	ruleset 6
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SSDroot
> 	step chooseleaf firstn 0 type net
> 	step emit
> }
> rule SASperHost {
> 	ruleset 4
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SASroot
> 	step chooseleaf firstn 0 type host
> 	step emit
> }
> rule SASperNetwork {
> 	ruleset 5
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SASroot
> 	step chooseleaf firstn 0 type net
> 	step emit
> }
> rule SSDperOSDfirst {
> 	ruleset 7
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take SSDroot
> 	step choose firstn 1 type osd
> 	step emit
> 	step take SASroot
> 	step chooseleaf firstn -1 type net
> 	step emit
> }
> 
> # end crush map
> 
> 
> 
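For completeness, removing temporary hosts like faude / rurkh / murmillia
from the CRUSH map is normally done either online, bucket by bucket, or by
editing the decompiled map. A sketch of both approaches follows (the host
names come from the message above, but which method was actually used here
is an assumption):

# online: drop each temporary bucket, assuming its OSDs were removed first
# (crush remove fails on a non-empty bucket)
ceph osd crush remove faude
ceph osd crush remove rurkh
ceph osd crush remove murmillia

# offline: decompile, edit, recompile and re-inject the map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# ... delete the obsolete host buckets and their "item" references ...
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new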

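A note on the most unusual rule above, SSDperOSDfirst (ruleset 7):
"step choose firstn 1 type osd" under SSDroot puts the primary copy on an
SSD, and "step chooseleaf firstn -1 type net" under SASroot then places the
remaining num_rep - 1 copies on SAS OSDs, with at most one copy per "net"
bucket. A rule like this can be sanity-checked against the compiled map
with crushtool, for example (a sketch; on some crushtool versions
--show-mappings is unavailable and the mappings are printed by default):

crushtool -i crush.bin --test --rule 7 --num-rep 3 --show-mappings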

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com