Re: Ceph Usage web and terminal.

Hi,

On 27.10.2021 12:03, Eneko Lacunza wrote:
Hi,

On 27/10/21 at 9:55, Сергей Цаболов wrote:
My Ceph installation is:

6 Proxmox nodes with 2 disks (8 TB) on every node.

I created 12 OSDs from all the 8 TB disks.

The installed Ceph is version 15.2.14 octopus (stable).

I installed 6 monitors (all running) and 6 managers; 1 of them is running (*active*), all the others are *standby*.

In Ceph I have 4 pools:

device_health_metrics: Size/min 3/2, Crush Rule: replicated_rule, # of PGs: 1, PG Autoscale Mode: on, Min. # of PGs: 1

cephfs_data: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Min. # of PGs:

cephfs_metadata: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Target Size: 500GB, Min. # of PGs: 16

pool_vm: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 512, PG Autoscale Mode: on, Target Ratio: 1
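
For reference, these per-pool settings can also be read back in the terminal; a quick sketch, using the pool names above:

ceph osd pool ls detail
ceph osd pool get pool_vm size
ceph osd pool get pool_vm min_size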

You're aware that size 2/2 makes it very likely that you will run into write problems, right? (Any single OSD issue will block writes.)

No, I don't have a disk problem; I was trying to find where I lost so many TB, because before changing Size & min Size to 2/2 I had 3/2, and the space was 28-29 TB.



And now the pool usage shown on the web and in the terminal confuses me:

Storage cephfs: on the web I see 42.80 TB, in the terminal with ceph df: 39 TiB

Storage pool_vm: on the web I see 45.27 TB, in the terminal with ceph df: 39 TiB

This is the TB->TiB conversion: 42.80 TB = 42,800,000,000,000 bytes / 1024⁴ ~= 39 TiB
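
A quick sanity check of that conversion in the terminal, using the web figure above:

python3 -c "print(42.80e12 / 1024**4)"   # ~38.93, i.e. roughly the 39 TiB that ceph df shows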

Oh, I didn't calculate the space correctly, my mistake!!! 😉


Also, it can't realistically be usage; it must be the total available space (roughly half the raw space, since your pools are replicated with size=2).


All pool usage I can see in the terminal with ceph df:

--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27
TOTAL  87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  1.2 MiB       12  3.6 MiB      0     26 TiB
pool_vm                 2  512  2.3 TiB  732.57k  4.6 TiB   5.55     39 TiB
cephfs_data             3   32      0 B        0      0 B      0     39 TiB
cephfs_metadata         4   32  9.8 MiB       24   21 MiB      0     39 TiB
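
As a rough cross-check of the MAX AVAIL column (ignoring OSD imbalance, and assuming the default full ratio of 0.95): the available raw space, scaled by the full ratio and divided by the pool's replica size, lands close to the values shown above:

python3 -c "print(83 * 0.95 / 2)"   # ~39.4 TiB, close to the 39 TiB for the size=2 pools
python3 -c "print(83 * 0.95 / 3)"   # ~26.3 TiB, close to the 26 TiB for device_health_metrics (size=3)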

I don't quite understand the discrepancy between the TB usage shown on the web and in the terminal.
Maybe I misunderstood something.

P.S. And the question is: which disk usage figure should I rely on for stored data, the one I see on the web or the one I see in the terminal?


Hope this helps ;)

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/


--
-------------------------
Best regards,
Сергей Цаболов,
System Administrator
ООО "Т8"
Tel.: +74992716161,
Mobile: +79850334875
tsabolov@xxxxx
ООО «Т8», 107076, Moscow, Krasnobogatyrskaya ul., 44, bldg. 1
www.t8.ru

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



