Re: Ceph Usage web and terminal.


 



Also note that you want an odd number of MONs to form a quorum reliably: with 6 MONs a quorum still needs 4 votes, so you gain no fault tolerance over 5. I would therefore recommend removing one MON to end up with 5.
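For illustration, the quorum is a simple majority, so a quick Python sketch of that arithmetic (nothing Ceph-specific, just the counting):

    # Quorum is a simple majority of the monitors: floor(n/2) + 1.
    def quorum_size(n_mons: int) -> int:
        return n_mons // 2 + 1

    for n in (5, 6):
        q = quorum_size(n)
        # Failures tolerated before quorum is lost.
        print(f"{n} MONs: quorum = {q}, tolerates {n - q} failures")
    # 5 MONs: quorum = 3, tolerates 2 failures
    # 6 MONs: quorum = 4, tolerates 2 failures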


Quoting Eneko Lacunza <elacunza@xxxxxxxxx>:

Hi,

El 27/10/21 a las 9:55, Сергей Цаболов escribió:
My installation of Ceph is:

6 Proxmox nodes with 2 disks (8 TB) on every node.

I made 12 OSDs from all the 8 TB disks.

The installed Ceph version is 15.2.14 octopus (stable).

I installed 6 monitors (all running) and 6 managers; 1 of them is running (*active*) and all the others are *standby*.

In Ceph I have 4 pools:

device_health_metrics: Size/min 3/2, Crush Rule: replicated_rule, # of PGs: 1, PG Autoscale Mode: on, Min. # of PGs: 1

cephfs_data: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Min. # of PGs

cephfs_metadata: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Target Size: 500 GB, Min. # of PGs: 16

pool_vm: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 512, PG Autoscale Mode: on, Target Ratio: 1

You're aware that size 2/2 makes it very likely you will run into write problems, right? With min_size 2, a single OSD issue will block writes to the affected PGs.


And now the pool usage confuses me; it differs on the web and in the terminal:

Storage cephfs: on the web I see 42.80 TB, in the terminal with ceph df: 39 TiB

Storage pool_vm: on the web I see 45.27 TB, in the terminal with ceph df: 39 TiB

This is the TB->TiB conversion: 42.80 TB = 42,800,000,000,000 bytes / 1024⁴ ≈ 39 TiB.
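The same arithmetic as a quick Python sketch (using the 42.80 TB figure from the web UI above):

    # Decimal TB (web UI, base 10) vs. binary TiB (ceph df, base 2).
    tb_web = 42.80                      # value shown in the web UI, in TB
    bytes_total = tb_web * 10**12       # 42,800,000,000,000 bytes
    tib = bytes_total / 1024**4         # convert to TiB
    print(f"{tb_web} TB ~= {tib:.1f} TiB")   # -> 42.8 TB ~= 38.9 TiB, i.e. ~39 TiB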

Also, it can't realistically be usage; it must be the total available space (roughly half the raw space, because your pools are replicated with size=2).
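A rough sketch of where the 39 TiB (and the 26 TiB for the size-3 pool) come from, assuming the default full ratio of 0.95 and evenly balanced OSDs (the real MAX AVAIL calculation also looks at the most-full OSD):

    # Rough MAX AVAIL estimate for replicated pools from the RAW STORAGE numbers.
    raw_avail_tib = 83        # AVAIL from the RAW STORAGE section below
    full_ratio = 0.95         # assumed default mon full ratio
    for size in (2, 3):       # size=2 pools vs. the size=3 device_health_metrics pool
        print(f"size={size}: ~{raw_avail_tib * full_ratio / size:.0f} TiB")
    # size=2: ~39 TiB
    # size=3: ~26 TiB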


All the pool usage I see in the terminal with ceph df:

--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27
TOTAL  87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  1.2 MiB       12  3.6 MiB      0     26 TiB
pool_vm                 2  512  2.3 TiB  732.57k  4.6 TiB   5.55     39 TiB
cephfs_data             3   32      0 B        0      0 B      0     39 TiB
cephfs_metadata         4   32  9.8 MiB       24   21 MiB      0     39 TiB

I don't quite understand the discrepancy between the TB usage on the web and in the terminal.
Maybe I misunderstood something.

P.S. And the question is which of the usage figures I can rely on for stored data: the one I see on the web, or the one I see in the terminal?


Hope this helps ;)

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



