Re: Doubt about AVAIL space on df

With “ceph osd df tree” it will be clearer, but right now I can already see some OSDs with a %USE between 44% and 65%.

“ceph osd df tree” also gives you the balance at the host level.

Do you have the balancer enabled? Without a “perfect” distribution you cannot use the full space.
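If it is not enabled, something along these lines should turn it on under Luminous (only a sketch, assuming the mgr balancer module is available on your cluster):

# ceph mgr module enable balancer
# ceph balancer status
# ceph osd set-require-min-compat-client luminous   (required before switching to upmap mode)
# ceph balancer mode upmap
# ceph balancer on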

In our case we gained space by manually rebalancing disks; that causes some objects to move to other OSDs, but you see the available space come back quite fast.
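For the manual route, the built-in reweight commands are one way to do it (a rough sketch, not our exact procedure; adjust the threshold or weights to your cluster):

# ceph osd test-reweight-by-utilization 110   (dry run, shows which OSDs would be changed)
# ceph osd reweight-by-utilization 110        (apply it)

or per OSD, for example:

# ceph osd reweight 64 0.95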

Regards


From: German Anders <yodasbunker@xxxxxxxxx>
Sent: Tuesday, February 4, 2020 14:20
To: EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx>
CC: ceph-users@xxxxxxxx
Subject: Re: Doubt about AVAIL space on df

Hi Manuel,

Sure thing:

# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 0  nvme 1.00000  1.00000 1.09TiB  496GiB  622GiB 44.35 0.91 143
 1  nvme 1.00000  1.00000 1.09TiB  488GiB  630GiB 43.63 0.89 141
 2  nvme 1.00000  1.00000 1.09TiB  537GiB  581GiB 48.05 0.99 155
 3  nvme 1.00000  1.00000 1.09TiB  473GiB  644GiB 42.36 0.87 137
 4  nvme 1.00000  1.00000 1.09TiB  531GiB  587GiB 47.52 0.97 153
 5  nvme 1.00000  1.00000 1.09TiB  476GiB  642GiB 42.55 0.87 137
 6  nvme 1.00000  1.00000 1.09TiB  467GiB  651GiB 41.77 0.86 135
 7  nvme 1.00000  1.00000 1.09TiB  543GiB  574GiB 48.61 1.00 157
 8  nvme 1.00000  1.00000 1.09TiB  481GiB  636GiB 43.08 0.88 139
 9  nvme 1.00000  1.00000 1.09TiB  457GiB  660GiB 40.92 0.84 133
10  nvme 1.00000  1.00000 1.09TiB  513GiB  604GiB 45.92 0.94 148
11  nvme 1.00000  1.00000 1.09TiB  484GiB  634GiB 43.29 0.89 140
12  nvme 1.00000  1.00000 1.09TiB  498GiB  620GiB 44.57 0.91 144
13  nvme 1.00000  1.00000 1.09TiB  560GiB  557GiB 50.13 1.03 162
14  nvme 1.00000  1.00000 1.09TiB  576GiB  542GiB 51.55 1.06 167
15  nvme 1.00000  1.00000 1.09TiB  545GiB  572GiB 48.78 1.00 158
16  nvme 1.00000  1.00000 1.09TiB  537GiB  581GiB 48.02 0.98 155
17  nvme 1.00000  1.00000 1.09TiB  507GiB  611GiB 45.36 0.93 147
18  nvme 1.00000  1.00000 1.09TiB  490GiB  628GiB 43.86 0.90 142
19  nvme 1.00000  1.00000 1.09TiB  533GiB  584GiB 47.72 0.98 155
20  nvme 1.00000  1.00000 1.09TiB  467GiB  651GiB 41.75 0.86 134
21  nvme 1.00000  1.00000 1.09TiB  447GiB  671GiB 39.97 0.82 129
22  nvme 1.00099  1.00000 1.09TiB  561GiB  557GiB 50.16 1.03 162
23  nvme 1.00000  1.00000 1.09TiB  441GiB  677GiB 39.46 0.81 127
24  nvme 1.00000  1.00000 1.09TiB  500GiB  618GiB 44.72 0.92 145
25  nvme 1.00000  1.00000 1.09TiB  462GiB  656GiB 41.30 0.85 133
26  nvme 1.00000  1.00000 1.09TiB  445GiB  672GiB 39.85 0.82 129
27  nvme 1.00000  1.00000 1.09TiB  564GiB  554GiB 50.45 1.03 162
28  nvme 1.00000  1.00000 1.09TiB  512GiB  605GiB 45.84 0.94 148
29  nvme 1.00000  1.00000 1.09TiB  553GiB  565GiB 49.49 1.01 160
30  nvme 1.00000  1.00000 1.09TiB  526GiB  592GiB 47.07 0.97 152
31  nvme 1.00000  1.00000 1.09TiB  484GiB  633GiB 43.34 0.89 140
32  nvme 1.00000  1.00000 1.09TiB  504GiB  613GiB 45.13 0.93 146
33  nvme 1.00000  1.00000 1.09TiB  550GiB  567GiB 49.23 1.01 159
34  nvme 1.00000  1.00000 1.09TiB  497GiB  620GiB 44.51 0.91 143
35  nvme 1.00000  1.00000 1.09TiB  457GiB  661GiB 40.88 0.84 132
36  nvme 1.00000  1.00000 1.09TiB  539GiB  578GiB 48.25 0.99 156
37  nvme 1.00000  1.00000 1.09TiB  516GiB  601GiB 46.19 0.95 149
38  nvme 1.00000  1.00000 1.09TiB  518GiB  600GiB 46.35 0.95 149
39  nvme 1.00000  1.00000 1.09TiB  456GiB  662GiB 40.81 0.84 132
40  nvme 1.00000  1.00000 1.09TiB  527GiB  591GiB 47.13 0.97 152
41  nvme 1.00000  1.00000 1.09TiB  536GiB  581GiB 47.98 0.98 155
42  nvme 1.00000  1.00000 1.09TiB  521GiB  597GiB 46.62 0.96 151
43  nvme 1.00000  1.00000 1.09TiB  459GiB  659GiB 41.05 0.84 132
44  nvme 1.00000  1.00000 1.09TiB  549GiB  569GiB 49.12 1.01 158
45  nvme 1.00000  1.00000 1.09TiB  569GiB  548GiB 50.95 1.04 164
46  nvme 1.00000  1.00000 1.09TiB  450GiB  668GiB 40.28 0.83 130
47  nvme 1.00000  1.00000 1.09TiB  491GiB  626GiB 43.97 0.90 142
48  nvme 1.00000  1.00000  931GiB  551GiB  381GiB 59.13 1.21 159
49  nvme 1.00000  1.00000  931GiB  469GiB  463GiB 50.34 1.03 136
50  nvme 1.00000  1.00000  931GiB  548GiB  384GiB 58.78 1.21 158
51  nvme 1.00000  1.00000  931GiB  380GiB  552GiB 40.79 0.84 109
52  nvme 1.00000  1.00000  931GiB  486GiB  445GiB 52.20 1.07 141
53  nvme 1.00000  1.00000  931GiB  502GiB  429GiB 53.93 1.11 146
54  nvme 1.00000  1.00000  931GiB  479GiB  452GiB 51.42 1.05 139
55  nvme 1.00000  1.00000  931GiB  521GiB  410GiB 55.93 1.15 150
56  nvme 1.00000  1.00000  931GiB  570GiB  361GiB 61.25 1.26 165
57  nvme 1.00000  1.00000  931GiB  404GiB  527GiB 43.43 0.89 117
58  nvme 1.00000  1.00000  931GiB  455GiB  476GiB 48.89 1.00 132
59  nvme 1.00000  1.00000  931GiB  535GiB  397GiB 57.39 1.18 154
60  nvme 1.00000  1.00000  931GiB  499GiB  433GiB 53.56 1.10 144
61  nvme 1.00000  1.00000  931GiB  446GiB  485GiB 47.92 0.98 129
62  nvme 1.00000  1.00000  931GiB  505GiB  427GiB 54.18 1.11 146
63  nvme 1.00000  1.00000  931GiB  563GiB  369GiB 60.39 1.24 162
64  nvme 1.00000  1.00000  931GiB  605GiB  326GiB 64.99 1.33 175
65  nvme 1.00000  1.00000  931GiB  476GiB  455GiB 51.10 1.05 138
66  nvme 1.00000  1.00000  931GiB  460GiB  471GiB 49.38 1.01 133
67  nvme 1.00000  1.00000  931GiB  483GiB  449GiB 51.82 1.06 140
68  nvme 1.00000  1.00000  931GiB  520GiB  411GiB 55.86 1.15 151
69  nvme 1.00000  1.00000  931GiB  481GiB  450GiB 51.64 1.06 139
70  nvme 1.00000  1.00000  931GiB  505GiB  426GiB 54.24 1.11 146
71  nvme 1.00000  1.00000  931GiB  576GiB  356GiB 61.81 1.27 166
72  nvme 1.00000  1.00000  931GiB  552GiB  379GiB 59.30 1.22 160
73  nvme 1.00000  1.00000  931GiB  442GiB  489GiB 47.47 0.97 128
74  nvme 1.00000  1.00000  931GiB  450GiB  482GiB 48.28 0.99 130
75  nvme 1.00000  1.00000  931GiB  529GiB  403GiB 56.77 1.16 153
76  nvme 1.00000  1.00000  931GiB  488GiB  443GiB 52.44 1.08 141
77  nvme 1.00000  1.00000  931GiB  570GiB  361GiB 61.25 1.26 165
78  nvme 1.00000  1.00000  931GiB  473GiB  458GiB 50.79 1.04 137
79  nvme 1.00000  1.00000  931GiB  536GiB  396GiB 57.54 1.18 155
80  nvme 1.00000  1.00000  931GiB  491GiB  440GiB 52.74 1.08 142
81  nvme 1.00000  1.00000  931GiB  510GiB  421GiB 54.78 1.12 148
82  nvme 1.00000  1.00000  931GiB  563GiB  369GiB 60.42 1.24 162
83  nvme 1.00000  1.00000  931GiB  599GiB  333GiB 64.28 1.32 173
                    TOTAL 85.1TiB 41.5TiB 43.6TiB 48.77
MIN/MAX VAR: 0.81/1.33  STDDEV: 6.30


Thanks in advance,

Best regards,



On Tue, Feb 4, 2020 at 10:15 AM EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx> wrote:
Hi German,

Can you post the output of ceph osd df tree?

Looks like your usage distribution is not perfect, and that's why you end up with less usable space than the raw total.
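To see how uneven it is, something like this (a rough sketch) lists the fullest OSDs; the fullest ones are what limit a pool's MAX AVAIL:

# ceph osd df tree
# ceph osd df | sort -nk8 | tail -5   (five fullest OSDs by %USE)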
Regards


-----Original Message-----
From: German Anders <yodasbunker@xxxxxxxxx>
Sent: Tuesday, February 4, 2020 14:00
To: ceph-users@xxxxxxxx
Subject: Doubt about AVAIL space on df

Hello Everyone,

I would like to understand if this output is right:

# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    85.1TiB     43.7TiB      41.4TiB         48.68
POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    volumes     13     13.8TiB     64.21       7.68TiB     3620495

I only have one (1) pool, called 'volumes', which is using 13.8TiB. We have a replica count of 3, so it's actually using 41.4TiB of raw space, and that matches the RAW USED; up to this point it's fine. But then the GLOBAL section says that the AVAIL space is 43.7TiB and the %RAW USED is only 48.68%.

So if I use the 7.68TiB of MAX AVAIL and the pool reaches 100% usage, that would not add up to the total space of the cluster, right? I mean, where are those 43.7TiB of AVAIL space?
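Just to spell the numbers out as I read them:

13.8TiB used x 3 replicas  ~ 41.4TiB, which matches RAW USED
85.1TiB total - 41.4TiB    ~ 43.7TiB, which matches AVAIL
7.68TiB MAX AVAIL x 3      ~ 23TiB of raw space, well short of those 43.7TiB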

I'm using Luminous 12.2.12 release.

Sorry if it's a silly question or if it has been answered before.

Thanks in advance,

Best regards,
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



