Re: Doubt about AVAIL space on df

Manuel, here is the output of the ceph osd df tree command:

# ceph osd df tree
ID  CLASS WEIGHT   REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS TYPE NAME
 -7       84.00099        - 85.1TiB 41.6TiB 43.6TiB 48.82 1.00   - root root
 -5       12.00000        - 13.1TiB 5.81TiB 7.29TiB 44.38 0.91   -     rack rack1
 -1       12.00000        - 13.1TiB 5.81TiB 7.29TiB 44.38 0.91   -         node cpn01
  0  nvme  1.00000  1.00000 1.09TiB  496GiB  621GiB 44.40 0.91 143             osd.0
  1  nvme  1.00000  1.00000 1.09TiB  489GiB  629GiB 43.72 0.90 141             osd.1
  2  nvme  1.00000  1.00000 1.09TiB  537GiB  581GiB 48.03 0.98 155             osd.2
  3  nvme  1.00000  1.00000 1.09TiB  474GiB  644GiB 42.40 0.87 137             osd.3
  4  nvme  1.00000  1.00000 1.09TiB  532GiB  586GiB 47.57 0.97 153             osd.4
  5  nvme  1.00000  1.00000 1.09TiB  476GiB  642GiB 42.60 0.87 137             osd.5
  6  nvme  1.00000  1.00000 1.09TiB  467GiB  650GiB 41.82 0.86 135             osd.6
  7  nvme  1.00000  1.00000 1.09TiB  544GiB  574GiB 48.65 1.00 157             osd.7
  8  nvme  1.00000  1.00000 1.09TiB  482GiB  636GiB 43.12 0.88 139             osd.8
  9  nvme  1.00000  1.00000 1.09TiB  458GiB  660GiB 40.96 0.84 133             osd.9
 10  nvme  1.00000  1.00000 1.09TiB  514GiB  604GiB 45.97 0.94 148             osd.10
 11  nvme  1.00000  1.00000 1.09TiB  484GiB  633GiB 43.34 0.89 140             osd.11
 -6       12.00099        - 13.1TiB 6.02TiB 7.08TiB 45.98 0.94   -     rack rack2
 -2       12.00099        - 13.1TiB 6.02TiB 7.08TiB 45.98 0.94   -         node cpn02
 12  nvme  1.00000  1.00000 1.09TiB  499GiB  619GiB 44.61 0.91 144             osd.12
 13  nvme  1.00000  1.00000 1.09TiB  561GiB  557GiB 50.19 1.03 162             osd.13
 14  nvme  1.00000  1.00000 1.09TiB  577GiB  541GiB 51.60 1.06 167             osd.14
 15  nvme  1.00000  1.00000 1.09TiB  546GiB  572GiB 48.84 1.00 158             osd.15
 16  nvme  1.00000  1.00000 1.09TiB  537GiB  580GiB 48.07 0.98 155             osd.16
 17  nvme  1.00000  1.00000 1.09TiB  508GiB  610GiB 45.41 0.93 147             osd.17
 18  nvme  1.00000  1.00000 1.09TiB  490GiB  628GiB 43.86 0.90 142             osd.18
 19  nvme  1.00000  1.00000 1.09TiB  534GiB  584GiB 47.76 0.98 155             osd.19
 20  nvme  1.00000  1.00000 1.09TiB  467GiB  651GiB 41.80 0.86 134             osd.20
 21  nvme  1.00000  1.00000 1.09TiB  447GiB  671GiB 40.01 0.82 129             osd.21
 22  nvme  1.00099  1.00000 1.09TiB  561GiB  556GiB 50.21 1.03 162             osd.22
 23  nvme  1.00000  1.00000 1.09TiB  441GiB  677GiB 39.45 0.81 127             osd.23
-15       12.00000        - 13.1TiB 5.92TiB 7.18TiB 45.20 0.93   -     rack rack3
 -3       12.00000        - 13.1TiB 5.92TiB 7.18TiB 45.20 0.93   -         node cpn03
 24  nvme  1.00000  1.00000 1.09TiB  500GiB  617GiB 44.77 0.92 145             osd.24
 25  nvme  1.00000  1.00000 1.09TiB  462GiB  655GiB 41.37 0.85 133             osd.25
 26  nvme  1.00000  1.00000 1.09TiB  446GiB  672GiB 39.88 0.82 129             osd.26
 27  nvme  1.00000  1.00000 1.09TiB  565GiB  553GiB 50.54 1.04 162             osd.27
 28  nvme  1.00000  1.00000 1.09TiB  513GiB  605GiB 45.89 0.94 148             osd.28
 29  nvme  1.00000  1.00000 1.09TiB  554GiB  564GiB 49.55 1.01 160             osd.29
 30  nvme  1.00000  1.00000 1.09TiB  527GiB  591GiB 47.12 0.97 152             osd.30
 31  nvme  1.00000  1.00000 1.09TiB  484GiB  634GiB 43.31 0.89 140             osd.31
 32  nvme  1.00000  1.00000 1.09TiB  505GiB  612GiB 45.21 0.93 146             osd.32
 33  nvme  1.00000  1.00000 1.09TiB  551GiB  567GiB 49.28 1.01 159             osd.33
 34  nvme  1.00000  1.00000 1.09TiB  498GiB  620GiB 44.52 0.91 143             osd.34
 35  nvme  1.00000  1.00000 1.09TiB  457GiB  660GiB 40.93 0.84 132             osd.35
-16       12.00000        - 13.1TiB 6.00TiB 7.10TiB 45.77 0.94   -     rack rack4
 -4       12.00000        - 13.1TiB 6.00TiB 7.10TiB 45.77 0.94   -         node cpn04
 36  nvme  1.00000  1.00000 1.09TiB  540GiB  578GiB 48.29 0.99 156             osd.36
 37  nvme  1.00000  1.00000 1.09TiB  517GiB  601GiB 46.25 0.95 149             osd.37
 38  nvme  1.00000  1.00000 1.09TiB  519GiB  599GiB 46.42 0.95 149             osd.38
 39  nvme  1.00000  1.00000 1.09TiB  457GiB  661GiB 40.85 0.84 132             osd.39
 40  nvme  1.00000  1.00000 1.09TiB  527GiB  590GiB 47.17 0.97 152             osd.40
 41  nvme  1.00000  1.00000 1.09TiB  537GiB  581GiB 48.01 0.98 155             osd.41
 42  nvme  1.00000  1.00000 1.09TiB  522GiB  596GiB 46.68 0.96 151             osd.42
 43  nvme  1.00000  1.00000 1.09TiB  459GiB  658GiB 41.09 0.84 132             osd.43
 44  nvme  1.00000  1.00000 1.09TiB  550GiB  568GiB 49.17 1.01 158             osd.44
 45  nvme  1.00000  1.00000 1.09TiB  570GiB  548GiB 51.00 1.04 164             osd.45
 46  nvme  1.00000  1.00000 1.09TiB  451GiB  667GiB 40.32 0.83 130             osd.46
 47  nvme  1.00000  1.00000 1.09TiB  492GiB  626GiB 44.03 0.90 142             osd.47
-20       12.00000        - 10.9TiB 5.77TiB 5.15TiB 52.84 1.08   -     rack rack5
-19       12.00000        - 10.9TiB 5.77TiB 5.15TiB 52.84 1.08   -         node cpn05
 48  nvme  1.00000  1.00000  931GiB  551GiB  380GiB 59.19 1.21 159             osd.48
 49  nvme  1.00000  1.00000  931GiB  469GiB  462GiB 50.39 1.03 136             osd.49
 50  nvme  1.00000  1.00000  931GiB  548GiB  384GiB 58.83 1.20 158             osd.50
 51  nvme  1.00000  1.00000  931GiB  380GiB  551GiB 40.83 0.84 109             osd.51
 52  nvme  1.00000  1.00000  931GiB  487GiB  445GiB 52.24 1.07 141             osd.52
 53  nvme  1.00000  1.00000  931GiB  503GiB  429GiB 53.98 1.11 146             osd.53
 54  nvme  1.00000  1.00000  931GiB  479GiB  452GiB 51.47 1.05 139             osd.54
 55  nvme  1.00000  1.00000  931GiB  522GiB  410GiB 55.99 1.15 150             osd.55
 56  nvme  1.00000  1.00000  931GiB  571GiB  360GiB 61.31 1.26 165             osd.56
 57  nvme  1.00000  1.00000  931GiB  405GiB  527GiB 43.46 0.89 117             osd.57
 58  nvme  1.00000  1.00000  931GiB  456GiB  475GiB 48.97 1.00 132             osd.58
 59  nvme  1.00000  1.00000  931GiB  535GiB  396GiB 57.45 1.18 154             osd.59
-23       12.00000        - 10.9TiB 5.98TiB 4.93TiB 54.79 1.12   -     rack rack6
-24       12.00000        - 10.9TiB 5.98TiB 4.93TiB 54.79 1.12   -         node cpn06
 60  nvme  1.00000  1.00000  931GiB  499GiB  432GiB 53.61 1.10 144             osd.60
 61  nvme  1.00000  1.00000  931GiB  447GiB  485GiB 47.94 0.98 129             osd.61
 62  nvme  1.00000  1.00000  931GiB  505GiB  426GiB 54.24 1.11 146             osd.62
 63  nvme  1.00000  1.00000  931GiB  563GiB  368GiB 60.47 1.24 162             osd.63
 64  nvme  1.00000  1.00000  931GiB  605GiB  326GiB 65.01 1.33 175             osd.64
 65  nvme  1.00000  1.00000  931GiB  476GiB  455GiB 51.15 1.05 138             osd.65
 66  nvme  1.00000  1.00000  931GiB  461GiB  471GiB 49.44 1.01 133             osd.66
 67  nvme  1.00000  1.00000  931GiB  483GiB  448GiB 51.86 1.06 140             osd.67
 68  nvme  1.00000  1.00000  931GiB  521GiB  411GiB 55.92 1.15 151             osd.68
 69  nvme  1.00000  1.00000  931GiB  481GiB  450GiB 51.69 1.06 139             osd.69
 70  nvme  1.00000  1.00000  931GiB  506GiB  426GiB 54.29 1.11 146             osd.70
 71  nvme  1.00000  1.00000  931GiB  576GiB  355GiB 61.87 1.27 166             osd.71
-27       12.00000        - 10.9TiB 6.06TiB 4.85TiB 55.56 1.14   -     rack rack7
-28       12.00000        - 10.9TiB 6.06TiB 4.85TiB 55.56 1.14   -         node cpn07
 72  nvme  1.00000  1.00000  931GiB  554GiB  378GiB 59.43 1.22 160             osd.72
 73  nvme  1.00000  1.00000  931GiB  443GiB  489GiB 47.52 0.97 128             osd.73
 74  nvme  1.00000  1.00000  931GiB  450GiB  481GiB 48.33 0.99 130             osd.74
 75  nvme  1.00000  1.00000  931GiB  529GiB  403GiB 56.77 1.16 153             osd.75
 76  nvme  1.00000  1.00000  931GiB  489GiB  443GiB 52.48 1.08 141             osd.76
 77  nvme  1.00000  1.00000  931GiB  571GiB  360GiB 61.32 1.26 165             osd.77
 78  nvme  1.00000  1.00000  931GiB  474GiB  458GiB 50.87 1.04 137             osd.78
 79  nvme  1.00000  1.00000  931GiB  536GiB  395GiB 57.58 1.18 155             osd.79
 80  nvme  1.00000  1.00000  931GiB  492GiB  440GiB 52.79 1.08 142             osd.80
 81  nvme  1.00000  1.00000  931GiB  511GiB  421GiB 54.84 1.12 148             osd.81
 82  nvme  1.00000  1.00000  931GiB  563GiB  368GiB 60.48 1.24 162             osd.82
 83  nvme  1.00000  1.00000  931GiB  599GiB  332GiB 64.32 1.32 173             osd.83
                      TOTAL 85.1TiB 41.6TiB 43.6TiB 48.82
MIN/MAX VAR: 0.81/1.33  STDDEV: 6.30

Is there any documentation or script regarding manual redistribution for
rebalancing disks with minimal impact on client I/O?
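
For reference, the only built-in helpers I'm aware of are the balancer module
and reweight-by-utilization; a rough sketch of what I mean, assuming Luminous
12.2.x (the throttle values below are illustrative, not recommendations):

# ceph osd test-reweight-by-utilization
(dry run: reports which OSDs would be reweighted, moves nothing)

# ceph osd reweight-by-utilization
(lowers the REWEIGHT of the most-utilized OSDs so PGs migrate off them)

# ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
(throttles backfill so the resulting data movement has less impact on client I/O)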



On Tue, Feb 4, 2020 at 10:25 AM EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx>
wrote:

> With “ceph osd df tree” it will be clearer, but right now I can see that
> some OSDs have a %USE between 44% and 65%.
>
>
>
> “ceph osd df tree” also shows the balance at the host level.
>
>
>
> Do you have the balancer enabled? A non-perfect distribution is what keeps
> you from using the full space.
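>
> A minimal sketch of enabling it, assuming Luminous 12.2.x and that all
> clients are Luminous-capable (required for upmap mode):
>
> # ceph osd set-require-min-compat-client luminous
> # ceph mgr module enable balancer
> # ceph balancer mode upmap
> # ceph balancer on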
>
>
>
> In our case we gained space by manually rebalancing disks; that causes
> some objects to move to other OSDs, but you free up space quickly.
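>
> By manually rebalancing I mean overriding the REWEIGHT column on the
> fullest OSDs, e.g. (osd.64 is your fullest at ~65%; 0.95 is just an
> illustrative value, not a recommendation):
>
> # ceph osd reweight 64 0.95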
>
>
>
> Regards
>
>
>
>
>
> *From:* German Anders <yodasbunker@xxxxxxxxx>
> *Sent:* Tuesday, February 4, 2020 14:20
> *To:* EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx>
> *CC:* ceph-users@xxxxxxxx
> *Subject:* Re:  Doubt about AVAIL space on df
>
>
>
> Hi Manuel,
>
>
>
> Sure thing:
>
>
>
> # ceph osd df
> ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
>  0  nvme 1.00000  1.00000 1.09TiB  496GiB  622GiB 44.35 0.91 143
>  1  nvme 1.00000  1.00000 1.09TiB  488GiB  630GiB 43.63 0.89 141
>  2  nvme 1.00000  1.00000 1.09TiB  537GiB  581GiB 48.05 0.99 155
>  3  nvme 1.00000  1.00000 1.09TiB  473GiB  644GiB 42.36 0.87 137
>  4  nvme 1.00000  1.00000 1.09TiB  531GiB  587GiB 47.52 0.97 153
>  5  nvme 1.00000  1.00000 1.09TiB  476GiB  642GiB 42.55 0.87 137
>  6  nvme 1.00000  1.00000 1.09TiB  467GiB  651GiB 41.77 0.86 135
>  7  nvme 1.00000  1.00000 1.09TiB  543GiB  574GiB 48.61 1.00 157
>  8  nvme 1.00000  1.00000 1.09TiB  481GiB  636GiB 43.08 0.88 139
>  9  nvme 1.00000  1.00000 1.09TiB  457GiB  660GiB 40.92 0.84 133
> 10  nvme 1.00000  1.00000 1.09TiB  513GiB  604GiB 45.92 0.94 148
> 11  nvme 1.00000  1.00000 1.09TiB  484GiB  634GiB 43.29 0.89 140
> 12  nvme 1.00000  1.00000 1.09TiB  498GiB  620GiB 44.57 0.91 144
> 13  nvme 1.00000  1.00000 1.09TiB  560GiB  557GiB 50.13 1.03 162
> 14  nvme 1.00000  1.00000 1.09TiB  576GiB  542GiB 51.55 1.06 167
> 15  nvme 1.00000  1.00000 1.09TiB  545GiB  572GiB 48.78 1.00 158
> 16  nvme 1.00000  1.00000 1.09TiB  537GiB  581GiB 48.02 0.98 155
> 17  nvme 1.00000  1.00000 1.09TiB  507GiB  611GiB 45.36 0.93 147
> 18  nvme 1.00000  1.00000 1.09TiB  490GiB  628GiB 43.86 0.90 142
> 19  nvme 1.00000  1.00000 1.09TiB  533GiB  584GiB 47.72 0.98 155
> 20  nvme 1.00000  1.00000 1.09TiB  467GiB  651GiB 41.75 0.86 134
> 21  nvme 1.00000  1.00000 1.09TiB  447GiB  671GiB 39.97 0.82 129
> 22  nvme 1.00099  1.00000 1.09TiB  561GiB  557GiB 50.16 1.03 162
> 23  nvme 1.00000  1.00000 1.09TiB  441GiB  677GiB 39.46 0.81 127
> 24  nvme 1.00000  1.00000 1.09TiB  500GiB  618GiB 44.72 0.92 145
> 25  nvme 1.00000  1.00000 1.09TiB  462GiB  656GiB 41.30 0.85 133
> 26  nvme 1.00000  1.00000 1.09TiB  445GiB  672GiB 39.85 0.82 129
> 27  nvme 1.00000  1.00000 1.09TiB  564GiB  554GiB 50.45 1.03 162
> 28  nvme 1.00000  1.00000 1.09TiB  512GiB  605GiB 45.84 0.94 148
> 29  nvme 1.00000  1.00000 1.09TiB  553GiB  565GiB 49.49 1.01 160
> 30  nvme 1.00000  1.00000 1.09TiB  526GiB  592GiB 47.07 0.97 152
> 31  nvme 1.00000  1.00000 1.09TiB  484GiB  633GiB 43.34 0.89 140
> 32  nvme 1.00000  1.00000 1.09TiB  504GiB  613GiB 45.13 0.93 146
> 33  nvme 1.00000  1.00000 1.09TiB  550GiB  567GiB 49.23 1.01 159
> 34  nvme 1.00000  1.00000 1.09TiB  497GiB  620GiB 44.51 0.91 143
> 35  nvme 1.00000  1.00000 1.09TiB  457GiB  661GiB 40.88 0.84 132
> 36  nvme 1.00000  1.00000 1.09TiB  539GiB  578GiB 48.25 0.99 156
> 37  nvme 1.00000  1.00000 1.09TiB  516GiB  601GiB 46.19 0.95 149
> 38  nvme 1.00000  1.00000 1.09TiB  518GiB  600GiB 46.35 0.95 149
> 39  nvme 1.00000  1.00000 1.09TiB  456GiB  662GiB 40.81 0.84 132
> 40  nvme 1.00000  1.00000 1.09TiB  527GiB  591GiB 47.13 0.97 152
> 41  nvme 1.00000  1.00000 1.09TiB  536GiB  581GiB 47.98 0.98 155
> 42  nvme 1.00000  1.00000 1.09TiB  521GiB  597GiB 46.62 0.96 151
> 43  nvme 1.00000  1.00000 1.09TiB  459GiB  659GiB 41.05 0.84 132
> 44  nvme 1.00000  1.00000 1.09TiB  549GiB  569GiB 49.12 1.01 158
> 45  nvme 1.00000  1.00000 1.09TiB  569GiB  548GiB 50.95 1.04 164
> 46  nvme 1.00000  1.00000 1.09TiB  450GiB  668GiB 40.28 0.83 130
> 47  nvme 1.00000  1.00000 1.09TiB  491GiB  626GiB 43.97 0.90 142
> 48  nvme 1.00000  1.00000  931GiB  551GiB  381GiB 59.13 1.21 159
> 49  nvme 1.00000  1.00000  931GiB  469GiB  463GiB 50.34 1.03 136
> 50  nvme 1.00000  1.00000  931GiB  548GiB  384GiB 58.78 1.21 158
> 51  nvme 1.00000  1.00000  931GiB  380GiB  552GiB 40.79 0.84 109
> 52  nvme 1.00000  1.00000  931GiB  486GiB  445GiB 52.20 1.07 141
> 53  nvme 1.00000  1.00000  931GiB  502GiB  429GiB 53.93 1.11 146
> 54  nvme 1.00000  1.00000  931GiB  479GiB  452GiB 51.42 1.05 139
> 55  nvme 1.00000  1.00000  931GiB  521GiB  410GiB 55.93 1.15 150
> 56  nvme 1.00000  1.00000  931GiB  570GiB  361GiB 61.25 1.26 165
> 57  nvme 1.00000  1.00000  931GiB  404GiB  527GiB 43.43 0.89 117
> 58  nvme 1.00000  1.00000  931GiB  455GiB  476GiB 48.89 1.00 132
> 59  nvme 1.00000  1.00000  931GiB  535GiB  397GiB 57.39 1.18 154
> 60  nvme 1.00000  1.00000  931GiB  499GiB  433GiB 53.56 1.10 144
> 61  nvme 1.00000  1.00000  931GiB  446GiB  485GiB 47.92 0.98 129
> 62  nvme 1.00000  1.00000  931GiB  505GiB  427GiB 54.18 1.11 146
> 63  nvme 1.00000  1.00000  931GiB  563GiB  369GiB 60.39 1.24 162
> 64  nvme 1.00000  1.00000  931GiB  605GiB  326GiB 64.99 1.33 175
> 65  nvme 1.00000  1.00000  931GiB  476GiB  455GiB 51.10 1.05 138
> 66  nvme 1.00000  1.00000  931GiB  460GiB  471GiB 49.38 1.01 133
> 67  nvme 1.00000  1.00000  931GiB  483GiB  449GiB 51.82 1.06 140
> 68  nvme 1.00000  1.00000  931GiB  520GiB  411GiB 55.86 1.15 151
> 69  nvme 1.00000  1.00000  931GiB  481GiB  450GiB 51.64 1.06 139
> 70  nvme 1.00000  1.00000  931GiB  505GiB  426GiB 54.24 1.11 146
> 71  nvme 1.00000  1.00000  931GiB  576GiB  356GiB 61.81 1.27 166
> 72  nvme 1.00000  1.00000  931GiB  552GiB  379GiB 59.30 1.22 160
> 73  nvme 1.00000  1.00000  931GiB  442GiB  489GiB 47.47 0.97 128
> 74  nvme 1.00000  1.00000  931GiB  450GiB  482GiB 48.28 0.99 130
> 75  nvme 1.00000  1.00000  931GiB  529GiB  403GiB 56.77 1.16 153
> 76  nvme 1.00000  1.00000  931GiB  488GiB  443GiB 52.44 1.08 141
> 77  nvme 1.00000  1.00000  931GiB  570GiB  361GiB 61.25 1.26 165
> 78  nvme 1.00000  1.00000  931GiB  473GiB  458GiB 50.79 1.04 137
> 79  nvme 1.00000  1.00000  931GiB  536GiB  396GiB 57.54 1.18 155
> 80  nvme 1.00000  1.00000  931GiB  491GiB  440GiB 52.74 1.08 142
> 81  nvme 1.00000  1.00000  931GiB  510GiB  421GiB 54.78 1.12 148
> 82  nvme 1.00000  1.00000  931GiB  563GiB  369GiB 60.42 1.24 162
> 83  nvme 1.00000  1.00000  931GiB  599GiB  333GiB 64.28 1.32 173
>                     TOTAL 85.1TiB 41.5TiB 43.6TiB 48.77
> MIN/MAX VAR: 0.81/1.33  STDDEV: 6.30
>
>
>
>
>
> Thanks in advance,
>
>
>
> Best regards,
>
>
>
>
>
>
>
> On Tue, Feb 4, 2020 at 10:15 AM EDH - Manuel Rios <
> mriosfer@xxxxxxxxxxxxxxxx> wrote:
>
> Hi German,
>
> Can you post the output of ceph osd df tree?
>
> Looks like your usage distribution is not perfect, and that's why you get
> less usable space than the raw total.
> Regards
>
>
> -----Original Message-----
> From: German Anders <yodasbunker@xxxxxxxxx>
> Sent: Tuesday, February 4, 2020 14:00
> To: ceph-users@xxxxxxxx
> Subject:  Doubt about AVAIL space on df
>
> Hello Everyone,
>
> I would like to understand if this output is right:
>
> *# ceph df*
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     85.1TiB     43.7TiB      41.4TiB         48.68
> POOLS:
>     NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
>     volumes     13     13.8TiB     64.21       7.68TiB     3620495
>
> I only have one pool called 'volumes', which is using 13.8TiB (we have a
> replica of 3), so it's actually using 41.4TiB, and that matches the RAW
> USED; so far, so good. But then the GLOBAL section says that the AVAIL
> space is 43.7TiB and the %RAW USED is only 48.68%.
>
> So if I use the 7.68TiB of MAX AVAIL and the pool goes up to 100% of
> usage, that would not add up to the total space of the cluster, right? I
> mean, where are those 43.7TiB of AVAIL space?
>
> I'm using Luminous 12.2.12 release.
>
> Sorry if it's a silly question or if it has been answered before.
>
> Thanks in advance,
>
> Best regards,
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx To unsubscribe send an
> email to ceph-users-leave@xxxxxxx
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



