Re: Need easy way to calculate Ceph cluster space for SolarWinds

The VAR values seem to stay relatively close to that +/- 1.00 range.

ubuntu@juju-5dcfd8-3-lxd-2:~$ sudo ceph osd df
ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 1    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB   11 GiB   38 GiB  8.1 TiB  55.59  0.92  174      up
 4    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   20 GiB   34 GiB  7.3 TiB  59.90  0.99  175      up
 9    ssd  18.19040   1.00000   18 TiB  9.7 TiB  9.6 TiB   23 GiB   35 GiB  8.5 TiB  53.11  0.88  185      up
13    ssd  18.19040   1.00000   18 TiB   13 TiB   13 TiB   19 GiB   48 GiB  4.8 TiB  73.87  1.22  199      up
17    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB   14 GiB   55 GiB  5.9 TiB  67.49  1.12  185      up
21    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB   24 GiB   38 GiB  7.8 TiB  57.27  0.95  179      up
25    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB   20 GiB   57 GiB  6.1 TiB  66.70  1.10  192      up
29    ssd  18.19040   1.00000   18 TiB  9.2 TiB  9.2 TiB   15 GiB   34 GiB  9.0 TiB  50.61  0.84  170      up
33    ssd  18.19040   1.00000   18 TiB  9.2 TiB  9.1 TiB   13 GiB   36 GiB  9.0 TiB  50.56  0.84  180      up
39    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB  4.3 GiB   45 GiB  6.0 TiB  66.84  1.11  188      up
44    ssd  18.19040   1.00000   18 TiB   13 TiB   13 TiB   10 GiB   56 GiB  5.3 TiB  70.59  1.17  187      up
46    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB   14 GiB   44 GiB  7.8 TiB  57.24  0.95  174      up
 0    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB  5.0 GiB   38 GiB  6.9 TiB  62.13  1.03  172      up
 5    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   11 GiB   44 GiB  7.3 TiB  59.69  0.99  177      up
10    ssd  18.19040   1.00000   18 TiB   12 TiB   11 TiB   18 GiB   47 GiB  6.7 TiB  63.34  1.05  190      up
14    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB  5.1 GiB   48 GiB  6.3 TiB  65.51  1.08  189      up
18    ssd  18.19040   1.00000   18 TiB  9.7 TiB  9.6 TiB   13 GiB   33 GiB  8.5 TiB  53.14  0.88  175      up
22    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   20 GiB   42 GiB  7.3 TiB  59.61  0.99  183      up
26    ssd  18.19040   1.00000   18 TiB  9.8 TiB  9.8 TiB  9.9 GiB   31 GiB  8.4 TiB  53.88  0.89  186      up
30    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB  4.5 GiB   56 GiB  6.4 TiB  64.65  1.07  179      up
34    ssd  18.19040   1.00000   18 TiB   13 TiB   13 TiB   18 GiB   49 GiB  4.9 TiB  73.17  1.21  192      up
38    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB   16 GiB   51 GiB  6.4 TiB  64.67  1.07  186      up
40    ssd  18.19040   1.00000   18 TiB   13 TiB   12 TiB   23 GiB   52 GiB  5.7 TiB  68.91  1.14  184      up
42    ssd  18.19040   1.00000   18 TiB  9.7 TiB  9.7 TiB   14 GiB   26 GiB  8.5 TiB  53.35  0.88  171      up
 3    ssd  18.19040   1.00000   18 TiB  9.9 TiB  9.8 TiB   15 GiB   36 GiB  8.3 TiB  54.42  0.90  184      up
 7    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB  5.4 GiB   36 GiB  7.9 TiB  56.73  0.94  184      up
11    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB  9.6 GiB   54 GiB  6.6 TiB  63.76  1.06  188      up
15    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB  7.2 GiB   47 GiB  7.5 TiB  58.51  0.97  192      up
19    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   19 GiB   41 GiB  7.3 TiB  59.87  0.99  181      up
23    ssd  18.19040   1.00000   18 TiB   13 TiB   12 TiB   20 GiB   54 GiB  5.7 TiB  68.89  1.14  181      up
27    ssd  18.19040   1.00000   18 TiB  9.0 TiB  9.0 TiB   15 GiB   31 GiB  9.2 TiB  49.63  0.82  173      up
32    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   20 GiB   26 GiB  7.3 TiB  59.92  0.99  183      up
36    ssd  18.19040   1.00000   18 TiB  8.3 TiB  8.3 TiB   11 GiB   17 GiB  9.8 TiB  45.86  0.76  177      up
41    ssd  18.19040   1.00000   18 TiB   13 TiB   13 TiB   25 GiB   49 GiB  5.2 TiB  71.30  1.18  191      up
45    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB   13 GiB   42 GiB  7.9 TiB  56.37  0.93  163      up
47    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   13 GiB   38 GiB  7.2 TiB  60.45  1.00  167      up
 2    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB  5.2 GiB   43 GiB  7.2 TiB  60.42  1.00  179      up
 6    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB   28 GiB   47 GiB  7.1 TiB  60.99  1.01  184      up
 8    ssd  18.19040   1.00000   18 TiB   13 TiB   13 TiB   20 GiB   59 GiB  5.5 TiB  69.95  1.16  184      up
12    ssd  18.19040   1.00000   18 TiB   11 TiB   11 TiB  6.2 GiB   39 GiB  7.4 TiB  59.22  0.98  180      up
16    ssd  18.19040   1.00000   18 TiB  9.8 TiB  9.7 TiB   14 GiB   37 GiB  8.4 TiB  53.63  0.89  187      up
20    ssd  18.19040   1.00000   18 TiB  9.7 TiB  9.6 TiB   21 GiB   33 GiB  8.5 TiB  53.14  0.88  181      up
24    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB   10 GiB   46 GiB  6.3 TiB  65.24  1.08  180      up
28    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB   13 GiB   45 GiB  8.1 TiB  55.41  0.92  192      up
31    ssd  18.19040   1.00000   18 TiB   12 TiB   12 TiB   22 GiB   48 GiB  6.2 TiB  65.92  1.09  186      up
35    ssd  18.19040   1.00000   18 TiB   10 TiB   10 TiB   15 GiB   33 GiB  8.0 TiB  56.11  0.93  175      up
37    ssd  18.19040   1.00000   18 TiB   13 TiB   13 TiB   13 GiB   53 GiB  5.0 TiB  72.78  1.21  179      up
43    ssd  18.19040   1.00000   18 TiB  8.9 TiB  8.8 TiB   17 GiB   23 GiB  9.3 TiB  48.71  0.81  178      up
                        TOTAL  873 TiB  527 TiB  525 TiB  704 GiB  2.0 TiB  346 TiB  60.40
MIN/MAX VAR: 0.76/1.22  STDDEV: 6.98
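
If the balancer turns out to be off or misbehaving, a quick check and fix looks roughly like this (a sketch, not something established in this thread; upmap is a general suggestion and may not suit every deployment):

sudo ceph balancer status        # shows whether the balancer is active and which mode it uses
sudo ceph balancer mode upmap    # upmap usually evens out PG placement better than crush-compat
sudo ceph balancer on            # enable it if it is off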

-----Original Message-----
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Wednesday, March 20, 2024 5:09 PM
To: Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  Need easy way to calculate Ceph cluster space for SolarWinds

Looks like you have one device class and the same replication on all pools, which makes that simpler.

Your MAX AVAIL figures are lower than I would expect if you're using size=3: 346 TiB of raw space free would suggest roughly 115 TiB usable, versus the 61 TiB reported, and since MAX AVAIL is projected from the fullest OSD, imbalance drags it down. So I'd check whether you have the balancer enabled and whether it's working properly.

Run

ceph osd df

and look at the VAR column:

[rook@rook-ceph-tools-5ff8d58445-p9npl /]$ ceph osd df | head
ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE   DATA      OMAP     META     AVAIL    %USE   VAR   PGS  STATUS

Ideally the numbers should all be close to 1.00, plus or minus a little.
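
To pull the outliers out of that listing, something like the following works (a sketch; it assumes jq is installed and that your release's `ceph osd df --format json` output uses the nodes/utilization/var field names):

ceph osd df --format json | jq -r '.nodes | sort_by(.var)[] | [.id, .utilization, .var] | @tsv'

The first and last rows are the emptiest and fullest OSDs, which is where the balancer has the most work left to do.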

> On Mar 20, 2024, at 16:55, Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx> wrote:
>
> I had a request from upper management: they want to use SolarWinds to track what I am looking at, namely the cluster's total capacity, the space remaining overall, and the current RGW pools/buckets along with their allocated sizes and the space remaining in each. I am somewhat in the dark when it comes to breaking this down so that it is readable and understandable for non-technical people.
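
One way to hand those cluster-wide numbers to SolarWinds is to have a script component poll the JSON form of `ceph df` (a sketch; it assumes jq is available, and the field names are what recent releases emit, so verify them against your version):

sudo ceph df --format json | jq '.stats | {total_bytes, total_avail_bytes, total_used_raw_bytes}'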
>
> I was told that when it comes to pools and buckets, you sort of have to see it this way:
> - Bucket is like a folder
> - Pool is like a hard drive.
> - You can create many folders in a hard drive and you can add quota to each folder.
> - But if you want to know the remaining space, you need to check the hard drive.
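
Following that analogy, the per-pool ("hard drive") figures can be pulled the same way (again a sketch with assumed field names; max_avail is the pool's remaining space with replication already factored in):

sudo ceph df --format json | jq -r '.pools[] | [.name, .stats.stored, .stats.max_avail, .stats.percent_used] | @tsv'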
>
> I ran the "ceph df" command on the Ceph monitor, and the output looks like this:
>
>>> sudo ceph df
> --- RAW STORAGE ---
> CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
> ssd    873 TiB  346 TiB  527 TiB   527 TiB      60.40
> TOTAL  873 TiB  346 TiB  527 TiB   527 TiB      60.40
>
> --- POOLS ---
> POOL                             ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
> .mgr                              1     1  449 KiB        2  1.3 MiB      0     61 TiB
> default.rgw.buckets.data          2  2048  123 TiB   41.86M  371 TiB  66.76     61 TiB
> default.rgw.control               3     2      0 B        8      0 B      0     61 TiB
> default.rgw.data.root             4     2      0 B        0      0 B      0     61 TiB
> default.rgw.gc                    5     2      0 B        0      0 B      0     61 TiB
> default.rgw.log                   6     2   41 KiB      209  732 KiB      0     61 TiB
> default.rgw.intent-log            7     2      0 B        0      0 B      0     61 TiB
> default.rgw.meta                  8     2   20 KiB       96  972 KiB      0     61 TiB
> default.rgw.otp                   9     2      0 B        0      0 B      0     61 TiB
> default.rgw.usage                10     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.keys           11     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.email          12     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.swift          13     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.uid            14     2      0 B        0      0 B      0     61 TiB
> default.rgw.buckets.extra        15    16      0 B        0      0 B      0     61 TiB
> default.rgw.buckets.index        16    64  6.3 GiB      184   19 GiB   0.01     61 TiB
> .rgw.root                        17     2  2.3 KiB        4   48 KiB      0     61 TiB
> ceph-benchmarking                18   128  596 GiB  302.20k  1.7 TiB   0.94     61 TiB
> ceph-fs_data                     19    64  438 MiB      110  1.3 GiB      0     61 TiB
> ceph-fs_metadata                 20    16   37 MiB       32  111 MiB      0     61 TiB
> test                             21    32   21 TiB    5.61M   64 TiB  25.83     61 TiB
> DD-Test                          22    32   11 MiB       13   32 MiB      0     61 TiB
> nativesqlbackup                  24    32  539 MiB      147  1.6 GiB      0     61 TiB
> default.rgw.buckets.non-ec       25    32  1.7 MiB        0  5.0 MiB      0     61 TiB
> ceph-fs_sql_backups              26    32      0 B        0      0 B      0     61 TiB
> ceph-fs_sql_backups_metadata     27    32      0 B        0      0 B      0     61 TiB
> dd-drs-backups                   28    32      0 B        0      0 B      0     61 TiB
> default.rgw.jv-corp-pool.data    59    32   16 TiB   63.90M   49 TiB  21.12     61 TiB
> default.rgw.jv-corp-pool.index   60    32  108 GiB    1.19k  323 GiB   0.17     61 TiB
> default.rgw.jv-corp-pool.non-ec  61    32      0 B        0      0 B      0     61 TiB
> default.rgw.jv-comm-pool.data    62    32  8.1 TiB   44.20M   24 TiB  11.65     61 TiB
> default.rgw.jv-comm-pool.index   63    32   83 GiB      811  248 GiB   0.13     61 TiB
> default.rgw.jv-comm-pool.non-ec  64    32      0 B        0      0 B      0     61 TiB
> default.rgw.jv-va-pool.data      65    32  4.8 TiB   22.17M   14 TiB   7.28     61 TiB
> default.rgw.jv-va-pool.index     66    32   38 GiB      401  113 GiB   0.06     61 TiB
> default.rgw.jv-va-pool.non-ec    67    32      0 B        0      0 B      0     61 TiB
> jv-edi-pool                      68    32      0 B        0      0 B      0     61 TiB
>
> -- Michael
>
> -----Original Message-----
> From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
> Sent: Wednesday, March 20, 2024 2:48 PM
> To: Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject: Re:  Need easy way to calculate Ceph cluster
> space for SolarWinds
>
>
>> On Mar 20, 2024, at 14:42, Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx> wrote:
>>
>> Is there an easy way to poll a Ceph cluster to see how much space is
>> available
>
> `ceph df`
>
> The exporter has percentages per pool as well.
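
If scraping HTTP is easier for SolarWinds than shelling out, the manager's Prometheus module exposes the same figures (a sketch; mgr-host is a placeholder, 9283 is the default port, and the exact metric names can vary slightly between releases):

sudo ceph mgr module enable prometheus
curl -s http://mgr-host:9283/metrics | grep -E '^ceph_(cluster_total|pool_(stored|max_avail))'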
>
>
>> and how much space is available per bucket?
>
> Are you using RGW quotas?
>
>>
>> I'm looking for a way to use SolarWinds to monitor the overall cluster's space utilization and also to break down each RGW bucket to see how much space it was provisioned for and how much is still available.
>
> RGW buckets do not provision space. There may optionally be RGW quotas, but they're a different thing from what you're implying.
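
For completeness, per-bucket usage and quotas can be inspected and set with radosgw-admin (a sketch; the bucket name and uid are placeholders):

sudo radosgw-admin bucket stats --bucket=my-bucket                  # current usage and any quota on 'my-bucket' (placeholder)
sudo radosgw-admin quota set --quota-scope=bucket --uid=myuser --max-size=1099511627776   # 1 TiB in bytes; 'myuser' is a placeholder
sudo radosgw-admin quota enable --quota-scope=bucket --uid=myuser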
>
>
>>
>> -- Michael
>>
>>
>> Get Outlook for Android<https://aka.ms/AAb9ysg>
>
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


