Re: Ceph Usage web and terminal.

Hello to all.

In my case I have a 7-node Proxmox cluster with Ceph up and running (ceph version 15.2.15 octopus (stable) on all 7 nodes).

Ceph HEALTH_OK

ceph -s
  cluster:
    id:     9662e3fa-4ce6-41df-8d74-5deaa41a8dde
    health: HEALTH_OK

  services:
    mon: 7 daemons, quorum pve-3105,pve-3107,pve-3108,pve-3103,pve-3101,pve-3111,pve-3109 (age 17h)
    mgr: pve-3107(active, since 41h), standbys: pve-3109, pve-3103, pve-3105, pve-3101, pve-3111, pve-3108
    mds: cephfs:1 {0=pve-3105=up:active} 6 up:standby
    osd: 22 osds: 22 up (since 17h), 22 in (since 17h)

  task status:

  data:
    pools:   4 pools, 1089 pgs
    objects: 1.09M objects, 4.1 TiB
    usage:   7.7 TiB used, 99 TiB / 106 TiB avail
    pgs:     1089 active+clean

---------------------------------------------------------------------------------------------------------------------

ceph osd tree

ID   CLASS  WEIGHT     TYPE NAME            STATUS  REWEIGHT PRI-AFF
 -1         106.43005  root default
-13          14.55478      host pve-3101
 10    hdd    7.27739          osd.10           up   1.00000 1.00000
 11    hdd    7.27739          osd.11           up   1.00000 1.00000
-11          14.55478      host pve-3103
  8    hdd    7.27739          osd.8            up   1.00000 1.00000
  9    hdd    7.27739          osd.9            up   1.00000 1.00000
 -3          14.55478      host pve-3105
  0    hdd    7.27739          osd.0            up   1.00000 1.00000
  1    hdd    7.27739          osd.1            up   1.00000 1.00000
 -5          14.55478      host pve-3107
  2    hdd    7.27739          osd.2            up   1.00000 1.00000
  3    hdd    7.27739          osd.3            up   1.00000 1.00000
 -9          14.55478      host pve-3108
  6    hdd    7.27739          osd.6            up   1.00000 1.00000
  7    hdd    7.27739          osd.7            up   1.00000 1.00000
 -7          14.55478      host pve-3109
  4    hdd    7.27739          osd.4            up   1.00000 1.00000
  5    hdd    7.27739          osd.5            up   1.00000 1.00000
-15          19.10138      host pve-3111
 12    hdd   10.91409          osd.12           up   1.00000 1.00000
 13    hdd    0.90970          osd.13           up   1.00000 1.00000
 14    hdd    0.90970          osd.14           up   1.00000 1.00000
 15    hdd    0.90970          osd.15           up   1.00000 1.00000
 16    hdd    0.90970          osd.16           up   1.00000 1.00000
 17    hdd    0.90970          osd.17           up   1.00000 1.00000
 18    hdd    0.90970          osd.18           up   1.00000 1.00000
 19    hdd    0.90970          osd.19           up   1.00000 1.00000
 20    hdd    0.90970          osd.20           up   1.00000 1.00000
 21    hdd    0.90970          osd.21           up   1.00000 1.00000
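Not directly related to the usage question, but the tree above shows one large 10.9 TiB OSD next to nine ~0.9 TiB OSDs on pve-3111, so it may be worth glancing at the per-OSD fill level as well. A possible check (standard Ceph command, nothing cluster-specific assumed):

ceph osd df tree    # same CRUSH tree, plus SIZE / %USE / PGS per OSD and per host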

---------------------------------------------------------------------------------------------------------------

POOL     ID  PGS   STORED   OBJECTS  USED     %USED  MAX AVAIL
vm.pool   2  1024  3.0 TiB  863.31k  6.0 TiB   6.38     44 TiB

(this pool holds all the VM disks)

---------------------------------------------------------------------------------------------------------------

ceph osd map vm.pool vm.pool.object
osdmap e14319 pool 'vm.pool' (2) object 'vm.pool.object' -> pg 2.196f68d5 (2.d5) -> up ([2,4], p2) acting ([2,4], p2)
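The object name above is only a placeholder. To map a real object belonging to a VM disk, the RBD block-name prefix can be looked up first; a sketch, assuming a hypothetical image name vm-100-disk-0:

rbd info vm.pool/vm-100-disk-0                        # note the block_name_prefix, e.g. rbd_data.<id>
ceph osd map vm.pool rbd_data.<id>.0000000000000000   # map the first data object of that image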

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

And now I have a problem:

All VMs use a single pool (vm.pool) for their disks.

When node/host pve-3111 is shut down, on many of the other nodes/hosts (pve-3107, pve-3105) the VMs do not shut down, but they become unreachable on the network.

After the node/host is back up, Ceph returns to HEALTH_OK and all VMs become reachable on the network again (without a reboot).

Can someone suggest what I should check in Ceph?
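If it helps, I can capture the cluster state while pve-3111 is down; these are the standard Ceph CLI checks I would run (vm.pool is the pool from the output above):

ceph health detail                   # which PGs are degraded or inactive while the host is down
ceph osd pool get vm.pool size       # replica count of the VM pool
ceph osd pool get vm.pool min_size   # PGs that drop below min_size stop serving writes
ceph pg dump_stuck inactive          # PGs that are currently not serving I/O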

Thanks.

On 27.10.2021 12:34, Сергей Цаболов wrote:
Hi,

On 27.10.2021 12:03, Eneko Lacunza wrote:
Hi,

On 27/10/21 at 9:55, Сергей Цаболов wrote:
My installation of Ceph is:

6 Proxmox nodes with 2 disks (8 TB) on every node.

I made 12 OSDs out of all the 8 TB disks.

The installed Ceph version is 15.2.14 octopus (stable).

I installed 6 monitors (all running) and 6 managers, 1 of them running (*active*) and all the others *standby*.

In Ceph I have 4 pools (their PG and autoscale settings can also be checked with the command shown after the list):

device_health_metrics: Size/min 3/2, Crush Rule: replicated_rule, # of PGs: 1, PG Autoscale Mode: on, Min. # of PGs: 1

cephfs_data: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Min. # of PGs

cephfs_metadata: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 32, PG Autoscale Mode: on, Target Size: 500GB, Min. # of PGs: 16

pool_vm: Size/min 2/2, Crush Rule: replicated_rule, # of PGs: 512, PG Autoscale Mode: on, Target Ratio: 1
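A quick way to see the current PG counts, targets, and autoscaler recommendations for these pools in one place (standard command in Octopus):

ceph osd pool autoscale-status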

You're aware that size 2/2 makes it very likely you will have disk write problems, right (an OSD issue will prevent writes)?
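For reference, the replica settings can be inspected and raised back to the more common 3/2 with stock commands (pool name pool_vm taken from the list above); note that raising size triggers re-replication traffic:

ceph osd pool get pool_vm size
ceph osd pool get pool_vm min_size
ceph osd pool set pool_vm size 3       # adds a third replica, causes backfill
ceph osd pool set pool_vm min_size 2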

No, I don't have a disk problem. I was looking into why I lost so many TB: before changing Size & min Size to 2/2 I had 3/2, and the available space was 28-29 TB.



And now the pool usage confuses me, because the web UI and the terminal show different numbers:

Storage cephfs: on the web UI I see 42.80 TB, in the terminal with ceph df: 39 TiB

Storage pool_vm: on the web UI I see 45.27 TB, in the terminal with ceph df: 39 TiB

This is the TB -> TiB conversion: 42.80 TB = 42,800,000,000,000 bytes / 1024⁴ ≈ 39 TiB
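The same conversion as a quick one-liner in the terminal (plain arithmetic, nothing cluster-specific):

python3 -c "print(42.80e12 / 1024**4)"    # ~38.93, i.e. the ~39 TiB reported by ceph df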

Oh, I did not calculate the space correctly, that was my mistake!!! 😉


Also, it can't realistically be usage; it must be the total available space (roughly half the raw space, since your pools are replicated with size=2).


The usage of all pools as I see it in the terminal with ceph df:

--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27
TOTAL  87 TiB  83 TiB  4.6 TiB   4.6 TiB       5.27

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  1.2 MiB       12  3.6 MiB      0     26 TiB
pool_vm                 2  512  2.3 TiB  732.57k  4.6 TiB   5.55     39 TiB
cephfs_data             3   32      0 B        0      0 B      0     39 TiB
cephfs_metadata         4   32  9.8 MiB       24   21 MiB      0     39 TiB
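To compare these figures with the Proxmox web UI without any unit rounding, the same statistics can be dumped in exact bytes (standard ceph option):

ceph df --format json-pretty    # stored / max_avail per pool in raw bytes

For the size=2 pools, the 39 TiB MAX AVAIL is roughly the 83 TiB raw AVAIL divided by 2, less the headroom the manager keeps below the full ratio, which matches the "roughly half the raw space" point above.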

I don't quite understand the discrepancy in TB usage between the web UI and the terminal.
Maybe I misunderstood something.

P.S. And the question is: which usage figure should I rely on for stored data, the one I see on the web UI or the one I see in the terminal?


Hope this helps ;)

Cheers

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/


--
-------------------------
Best Regards

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



