Ceph usage: web vs. terminal

Hello,

My Ceph installation is:

6 Proxmox nodes with 2 disks (8 TB each) on every node.

I made 12 OSDs from all the 8 TB disks.
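
For reference, on a Proxmox node an OSD can be created from a raw disk roughly like this (a sketch; /dev/sdb and /dev/sdc are placeholder device names, not my real ones):

pveceph osd create /dev/sdb    # create one OSD per data disk
pveceph osd create /dev/sdc
ceph osd tree                  # verify that all 12 OSDs show up and are up/in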

The installed Ceph version is 15.2.14 Octopus (stable).

I installed 6 monitors (all running) and 6 managers; 1 of them is running as *active* and all the others are *standby*.
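
A quick way to confirm the monitor and manager state from the terminal (standard status commands, nothing custom):

ceph mon stat    # should list all 6 monitors in quorum
ceph mgr stat    # shows the active manager and the number of standbys
ceph -s          # overall health, mon/mgr/osd summary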

In Ceph I have 4 pools (terminal commands to check the same settings are sketched after the list):

device_health_metrics: Size/min 3/2, Crush Rule: replicated_rule, # of PGs: 1, PG Autoscale Mode: on, Min. # of PGs: 1

cephfs_data: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Min. # of PGs

cephfs_metadata: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Target Size: 500 GB, Min. # of PGs: 16

pool_vm: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 512, PG Autoscale Mode: on, Target Ratio: 1
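
The same pool settings can be read from the terminal (standard commands; pool_vm below is just one example pool):

ceph osd pool ls detail            # size, min_size, pg_num and crush rule per pool
ceph osd pool autoscale-status     # autoscaler mode, target size/ratio, suggested pg_num
ceph osd pool get pool_vm size     # a single setting for a single pool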

And now the pool usage confuses me, because the web UI and the terminal show different numbers:

Storage cephfs: on the web UI I see 42.80 TB, in the terminal with ceph df: 39 TiB

Storage pool_vm: on the web UI I see 45.27 TB, in the terminal with ceph df: 39 TiB
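
Part of the difference looks like units to me: the web UI seems to report decimal TB while ceph df reports binary TiB (this is my assumption, not something I have confirmed). A rough conversion check from the shell:

python3 -c 'print(39 * 2**40 / 1e12)'    # 39 TiB expressed in decimal TB -> ~42.88

That roughly matches the 42.80 TB shown for cephfs, but it does not line up with the 45.27 TB shown for pool_vm, so there must be more to it than units.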

All pool usage in the terminal is from ceph df:

--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27
TOTAL  87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  1.2 MiB       12  3.6 MiB      0     26 TiB
pool_vm                 2  512  2.3 TiB  732.57k  4.6 TiB   5.55     39 TiB
cephfs_data             3   32      0 B        0      0 B      0     39 TiB
cephfs_metadata         4   32  9.8 MiB       24   21 MiB      0     39 TiB

I don't quite understand the discrepancy in TB usage between the web UI and the terminal.
Maybe I misunderstood something.
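
For completeness, other standard ways to look at the same numbers from the terminal, in case they help pinpoint where the web value comes from:

ceph df detail      # per-pool stats with extra columns (quotas, dirty, compression)
ceph osd df tree    # raw usage per OSD, grouped by host
rados df            # per-pool object counts and space usage as seen by rados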

P.S. And the question is: which usage number can I rely on for stored data, the one I see on the web UI or the one I see in the terminal?


--
-------------------------
Sergey TS

With Best Regards

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



