ceph fs resize

Hi!

My Ceph version: ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)

I have a CephFS filesystem, and I have just added a new OSD to my cluster.

ceph pg stat:

289 pgs: 1 active+clean+scrubbing+deep, 288 active+clean; 873 GiB data, 2.7 TiB used, 3.5 TiB / 6.2 TiB avail
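All PGs are active+clean, so the rebalancing after adding the OSD appears to have finished. For reference, these are the read-only checks I know of to confirm the new OSD is up/in and to see how evenly data is spread (standard Octopus CLI; please correct me if these are the wrong things to look at):

ceph osd tree    # the new OSD should show as "up" with its expected CRUSH weight
ceph osd df      # per-OSD %USE; a wide spread here would mean the data is unbalanced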

How can I extend my CephFS from the 3.5 TiB currently available to the full 6.2 TiB?
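As far as I understand, CephFS has no explicit size to set: the AVAIL it reports is just the data pool's MAX AVAIL, which should grow on its own once the new capacity is usable. The only explicit cap I know of is a pool quota, which should be safe to rule out:

ceph osd pool get-quota static    # "max bytes: N/A" would mean no quota is limiting the pool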

Detailed information:

ceph fs status
static - 2 clients
======
RANK  STATE           MDS              ACTIVITY     DNS    INOS
 0    active  static.ceph02.sgpdiv  Reqs:    0 /s   136k   128k
      POOL         TYPE     USED  AVAIL
static_metadata  metadata  10.1G   973G
     static        data    2754G   973G
    STANDBY MDS
static.ceph05.aylgvy
static.ceph04.wsljnw
MDS version: ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)

ceph osd pool autoscale-status
POOL                     SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
device_health_metrics  896.1k                3.0         6399G  0.0000                                  1.0       1              on
static                 874.0G                3.0         6399G  0.4097                                  1.0     256              on
static_metadata         3472M                3.0         6399G  0.0016                                  4.0      32              on
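The RATE column of 3.0 suggests all three pools are 3x replicated; if I read this right, that can be confirmed per pool with:

ceph osd pool get static size          # replication factor, expected "size: 3"
ceph osd pool get static crush_rule    # which CRUSH rule places the data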

ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    6.2 TiB  3.5 TiB  2.7 TiB   2.7 TiB      43.55
TOTAL  6.2 TiB  3.5 TiB  2.7 TiB   2.7 TiB      43.55

--- POOLS ---
POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1  896 KiB       20  2.6 MiB      0    973 GiB
static                 14  874 GiB    1.44M  2.7 TiB  48.54    973 GiB
static_metadata        15  3.4 GiB    2.53M   10 GiB   0.35    973 GiB
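A quick sanity check on these numbers, assuming 3x replication everywhere:

874 GiB STORED x 3 replicas = 2622 GiB ≈ 2.6 TiB, close to the 2.7 TiB USED
3.5 TiB raw AVAIL / 3 replicas ≈ 1.17 TiB of ideal usable free space
973 GiB MAX AVAIL < 1.17 TiB

As far as I know, MAX AVAIL is derived from the fullest OSD (scaled by the full ratio), not from the raw total, so an uneven distribution after adding the OSD would explain the gap; see the balancer check after the listings below.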

ceph fs volume ls
[
    {
        "name": "static"
    }
]

ceph fs subvolume ls static
[]
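If the imbalance explanation above is right, I assume the balancer is the thing to look at rather than any filesystem resize (enabling it is the only step here that changes anything):

ceph balancer status        # current mode and whether it is active
ceph balancer mode upmap    # only if switching modes is desired
ceph balancer on            # only if it is currently off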