Re: Need easy way to calculate Ceph cluster space for SolarWinds

Looks like you have one device class and the same replication on all pools, which makes that simpler.
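
If the end goal is feeding SolarWinds, the JSON output is easier to scrape than the human-readable table. A rough sketch, assuming jq is available wherever you poll from (exact field names can differ slightly between releases):

ceph df -f json | jq '{total_bytes: .stats.total_bytes, avail_bytes: .stats.total_avail_bytes, pools: [.pools[] | {name, stored: .stats.stored, max_avail: .stats.max_avail}]}'

That gives you the raw cluster totals plus per-pool STORED and MAX AVAIL in bytes, which SolarWinds can graph directly. For per-bucket usage, `radosgw-admin bucket stats --bucket=<name>` reports size and object counts (and the quota, if one is set).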

Your MAX AVAIL figures are lower than I would expect if you're using size=3, so I'd check whether the balancer is enabled and working properly.
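
Back of the envelope: with 346 TiB raw AVAIL and size=3 you'd naively expect MAX AVAIL somewhere around 346 / 3 ≈ 115 TiB, but ceph df is reporting 61 TiB. MAX AVAIL is derived from the fullest OSD (and the full ratio), so a gap that size usually means the PGs aren't spread evenly. Quick check:

ceph balancer status

If that shows the balancer inactive, `ceph balancer on` (ideally in upmap mode) will usually close most of the gap over time.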

Run

ceph osd df

and look at the VAR column:

[rook@rook-ceph-tools-5ff8d58445-p9npl /]$ ceph osd df | head
ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE   DATA      OMAP     META     AVAIL    %USE   VAR   PGS  STATUS

Ideally the VAR values should all be close to 1.00, plus or minus a little.
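
If you would rather flag outliers programmatically than eyeball the table, the JSON output works too. A rough sketch, again assuming jq (field names can vary by release):

ceph osd df -f json | jq -r '.nodes[] | select(.var > 1.10 or .var < 0.90) | [.id, .name, .var] | @tsv'

Anything that shows up there holds noticeably more or less data than the cluster average, and the most-full OSD is what drags MAX AVAIL down.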

> On Mar 20, 2024, at 16:55, Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx> wrote:
> 
> I had a request from upper management: they want to use SolarWinds to extract what I am looking at and track it in terms of total available space, remaining space in the overall cluster, and the current RGW pools/buckets we have, with their allocated sizes and remaining space. I am somewhat in the dark when it comes to breaking this down so it is readable/understandable for people who are non-technical.
> 
> I was told that when it comes to pools and buckets, you sort of have to see it this way:
> - Bucket is like a folder
> - Pool is like a hard drive.
> - You can create many folders on a hard drive, and you can add a quota to each folder.
> - But if you want to know the remaining space, you need to check the hard drive.
> 
> I did the "ceph df" command on the ceph monitor and we have something that looks like this:
> 
>>> sudo ceph df
> --- RAW STORAGE ---
> CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
> ssd    873 TiB  346 TiB  527 TiB   527 TiB      60.40
> TOTAL  873 TiB  346 TiB  527 TiB   527 TiB      60.40
> 
> --- POOLS ---
> POOL                             ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
> .mgr                              1     1  449 KiB        2  1.3 MiB      0     61 TiB
> default.rgw.buckets.data          2  2048  123 TiB   41.86M  371 TiB  66.76     61 TiB
> default.rgw.control               3     2      0 B        8      0 B      0     61 TiB
> default.rgw.data.root             4     2      0 B        0      0 B      0     61 TiB
> default.rgw.gc                    5     2      0 B        0      0 B      0     61 TiB
> default.rgw.log                   6     2   41 KiB      209  732 KiB      0     61 TiB
> default.rgw.intent-log            7     2      0 B        0      0 B      0     61 TiB
> default.rgw.meta                  8     2   20 KiB       96  972 KiB      0     61 TiB
> default.rgw.otp                   9     2      0 B        0      0 B      0     61 TiB
> default.rgw.usage                10     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.keys           11     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.email          12     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.swift          13     2      0 B        0      0 B      0     61 TiB
> default.rgw.users.uid            14     2      0 B        0      0 B      0     61 TiB
> default.rgw.buckets.extra        15    16      0 B        0      0 B      0     61 TiB
> default.rgw.buckets.index        16    64  6.3 GiB      184   19 GiB   0.01     61 TiB
> .rgw.root                        17     2  2.3 KiB        4   48 KiB      0     61 TiB
> ceph-benchmarking                18   128  596 GiB  302.20k  1.7 TiB   0.94     61 TiB
> ceph-fs_data                     19    64  438 MiB      110  1.3 GiB      0     61 TiB
> ceph-fs_metadata                 20    16   37 MiB       32  111 MiB      0     61 TiB
> test                             21    32   21 TiB    5.61M   64 TiB  25.83     61 TiB
> DD-Test                          22    32   11 MiB       13   32 MiB      0     61 TiB
> nativesqlbackup                  24    32  539 MiB      147  1.6 GiB      0     61 TiB
> default.rgw.buckets.non-ec       25    32  1.7 MiB        0  5.0 MiB      0     61 TiB
> ceph-fs_sql_backups              26    32      0 B        0      0 B      0     61 TiB
> ceph-fs_sql_backups_metadata     27    32      0 B        0      0 B      0     61 TiB
> dd-drs-backups                   28    32      0 B        0      0 B      0     61 TiB
> default.rgw.jv-corp-pool.data    59    32   16 TiB   63.90M   49 TiB  21.12     61 TiB
> default.rgw.jv-corp-pool.index   60    32  108 GiB    1.19k  323 GiB   0.17     61 TiB
> default.rgw.jv-corp-pool.non-ec  61    32      0 B        0      0 B      0     61 TiB
> default.rgw.jv-comm-pool.data    62    32  8.1 TiB   44.20M   24 TiB  11.65     61 TiB
> default.rgw.jv-comm-pool.index   63    32   83 GiB      811  248 GiB   0.13     61 TiB
> default.rgw.jv-comm-pool.non-ec  64    32      0 B        0      0 B      0     61 TiB
> default.rgw.jv-va-pool.data      65    32  4.8 TiB   22.17M   14 TiB   7.28     61 TiB
> default.rgw.jv-va-pool.index     66    32   38 GiB      401  113 GiB   0.06     61 TiB
> default.rgw.jv-va-pool.non-ec    67    32      0 B        0      0 B      0     61 TiB
> jv-edi-pool                      68    32      0 B        0      0 B      0     61 TiB
> 
> -- Michael
> 
> -----Original Message-----
> From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
> Sent: Wednesday, March 20, 2024 2:48 PM
> To: Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject: Re:  Need easy way to calculate Ceph cluster space for SolarWinds
> 
>> On Mar 20, 2024, at 14:42, Michael Worsham <mworsham@xxxxxxxxxxxxxxxxxx> wrote:
>> 
>> Is there an easy way to poll a Ceph cluster to see how much space is
>> available
> 
> `ceph df`
> 
> The Prometheus exporter exposes per-pool usage percentages as well.
> 
> 
>> and how much space is available per bucket?
> 
> Are you using RGW quotas?
> 
>> 
>> Looking for a way to use SolarWinds to monitor the entire Ceph cluster space utilization and then also be able to break down each RGW bucket to see how much space it was provisioned for and how much is available.
> 
> RGW buckets do not provision space. There may optionally be RGW quotas, but those are a different thing from what you're implying.
> 
> 
>> 
>> -- Michael
>> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


