Re: available space seems low

I should also say that I enabled the balancer with upmap mode, since the only client (the backup server) is also running nautilus.
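For the record, the sequence I used was roughly this (from memory, so double-check against the docs):

# ceph osd set-require-min-compat-client luminous
# ceph balancer mode upmap
# ceph balancer on

The min-compat-client step is what upmap mode needs; since the only client here is on nautilus, that wasn't a problem.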

Seth

On 12/6/21 4:09 PM, Seth Galitzer wrote:
I'm running Ceph 14.2.20 on CentOS 7, installed from the official ceph-nautilus repo. I started a manual rebalance run and will set it back to auto once that is done. But I'm already seeing a cluster score of 0.015045, so I'm not sure what more it can do.
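(For completeness, the manual run was along these lines -- "myplan" is just the name I gave it:

# ceph balancer eval
# ceph balancer optimize myplan
# ceph balancer eval myplan
# ceph balancer execute myplan

The eval step is what reports the cluster score, lower being better.)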

Thanks.
Seth

# ceph osd crush rule dump
[
     {
         "rule_id": 0,
         "rule_name": "replicated_rule",
         "ruleset": 0,
         "type": 1,
         "min_size": 1,
         "max_size": 10,
         "steps": [
             {
                 "op": "take",
                 "item": -1,
                 "item_name": "default"
             },
             {
                 "op": "chooseleaf_firstn",
                 "num": 0,
                 "type": "host"
             },
             {
                 "op": "emit"
             }
         ]
     }
]


On 12/6/21 3:51 PM, Anthony D'Atri wrote:

Shoulda asked for your CRUSH rule(s), but with such a small cluster your failure domain is likely “host”.

The fullness imbalance is almost certainly the culprit here.

Which Nautilus release are you running?
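(If it's handy, `ceph versions` will show what each daemon is actually running:

# ceph versions
)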

On Dec 6, 2021, at 1:50 PM, Seth Galitzer <sgsax@xxxxxxx> wrote:

Anthony,

Thanks for the input. I've got my command outputs below. As for the balancer, I didn't realize it was off. A colleague had suggested it previously, but I didn't get very far with it at the time; I didn't think much about it since everything rebalanced automatically when I added the new OSDs. I've got it started now, so we'll see what that gets me.
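(For what it's worth, I'm keeping an eye on it with:

# ceph balancer status

which at least shows the mode and whether it's active.)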

Thanks.
Seth

# ceph osd tree
ID  CLASS WEIGHT    TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       461.06946 root default
-3        87.76190     host cfsb1
  0   hdd   7.31349         osd.0      up  1.00000 1.00000
  1   hdd   7.31349         osd.1      up  1.00000 1.00000
  2   hdd   7.31349         osd.2      up  1.00000 1.00000
  3   hdd   7.31349         osd.3      up  1.00000 1.00000
  4   hdd   7.31349         osd.4      up  1.00000 1.00000
  5   hdd   7.31349         osd.5      up  1.00000 1.00000
  6   hdd   7.31349         osd.6      up  1.00000 1.00000
  7   hdd   7.31349         osd.7      up  1.00000 1.00000
  8   hdd   7.31349         osd.8      up  1.00000 1.00000
  9   hdd   7.31349         osd.9      up  1.00000 1.00000
10   hdd   7.31349         osd.10     up  1.00000 1.00000
11   hdd   7.31349         osd.11     up  1.00000 1.00000
-5        87.76190     host cfsb2
12   hdd   7.31349         osd.12     up  1.00000 1.00000
13   hdd   7.31349         osd.13     up  1.00000 1.00000
14   hdd   7.31349         osd.14     up  1.00000 1.00000
15   hdd   7.31349         osd.15     up  1.00000 1.00000
16   hdd   7.31349         osd.16     up  1.00000 1.00000
17   hdd   7.31349         osd.17     up  1.00000 1.00000
18   hdd   7.31349         osd.18     up  1.00000 1.00000
19   hdd   7.31349         osd.19     up  1.00000 1.00000
20   hdd   7.31349         osd.20     up  1.00000 1.00000
21   hdd   7.31349         osd.21     up  1.00000 1.00000
22   hdd   7.31349         osd.22     up  1.00000 1.00000
23   hdd   7.31349         osd.23     up  1.00000 1.00000
-7        87.76190     host cfsb3
24   hdd   7.31349         osd.24     up  1.00000 1.00000
25   hdd   7.31349         osd.25     up  1.00000 1.00000
26   hdd   7.31349         osd.26     up  1.00000 1.00000
27   hdd   7.31349         osd.27     up  1.00000 1.00000
28   hdd   7.31349         osd.28     up  1.00000 1.00000
29   hdd   7.31349         osd.29     up  1.00000 1.00000
30   hdd   7.31349         osd.30     up  1.00000 1.00000
31   hdd   7.31349         osd.31     up  1.00000 1.00000
32   hdd   7.31349         osd.32     up  1.00000 1.00000
33   hdd   7.31349         osd.33     up  1.00000 1.00000
34   hdd   7.31349         osd.34     up  1.00000 1.00000
35   hdd   7.31349         osd.35     up  1.00000 1.00000
-9        87.76190     host cfsb4
36   hdd   7.31349         osd.36     up  1.00000 1.00000
37   hdd   7.31349         osd.37     up  1.00000 1.00000
38   hdd   7.31349         osd.38     up  1.00000 1.00000
39   hdd   7.31349         osd.39     up  1.00000 1.00000
40   hdd   7.31349         osd.40     up  1.00000 1.00000
41   hdd   7.31349         osd.41     up  1.00000 1.00000
42   hdd   7.31349         osd.42     up  1.00000 1.00000
43   hdd   7.31349         osd.43     up  1.00000 1.00000
44   hdd   7.31349         osd.44     up  1.00000 1.00000
45   hdd   7.31349         osd.45     up  1.00000 1.00000
46   hdd   7.31349         osd.46     up  1.00000 1.00000
47   hdd   7.31349         osd.47     up  1.00000 1.00000
-11       110.02185     host cfsb5
48   hdd   9.16849         osd.48     up  1.00000 1.00000
49   hdd   9.16849         osd.49     up  1.00000 1.00000
50   hdd   9.16849         osd.50     up  1.00000 1.00000
51   hdd   9.16849         osd.51     up  1.00000 1.00000
52   hdd   9.16849         osd.52     up  1.00000 1.00000
53   hdd   9.16849         osd.53     up  1.00000 1.00000
54   hdd   9.16849         osd.54     up  1.00000 1.00000
55   hdd   9.16849         osd.55     up  1.00000 1.00000
56   hdd   9.16849         osd.56     up  1.00000 1.00000
57   hdd   9.16849         osd.57     up  1.00000 1.00000
58   hdd   9.16849         osd.58     up  1.00000 1.00000
59   hdd   9.16849         osd.59     up  1.00000 1.00000


# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META     AVAIL    %USE  VAR  PGS STATUS
 0   hdd 7.31349  1.00000 7.3 TiB 4.8 TiB 4.8 TiB   8 KiB 8.7 GiB  2.5 TiB  66.02 1.05 108     up
 1   hdd 7.31349  1.00000 7.3 TiB 4.9 TiB 4.9 TiB  16 KiB 8.8 GiB  2.4 TiB  67.33 1.07 109     up
 2   hdd 7.31349  1.00000 7.3 TiB 4.5 TiB 4.5 TiB  76 KiB 8.3 GiB  2.8 TiB  62.15 0.98 101     up
 3   hdd 7.31349  1.00000 7.3 TiB 4.9 TiB 4.8 TiB   4 KiB 9.2 GiB  2.4 TiB  66.58 1.05 109     up
 4   hdd 7.31349  1.00000 7.3 TiB 5.3 TiB 5.3 TiB  64 KiB 9.6 GiB  2.0 TiB  72.99 1.16 121     up
 5   hdd 7.31349  1.00000 7.3 TiB 4.6 TiB 4.6 TiB  52 KiB 8.8 GiB  2.7 TiB  62.79 0.99 106     up
 6   hdd 7.31349  1.00000 7.3 TiB 3.9 TiB 3.9 TiB  60 KiB 7.5 GiB  3.4 TiB  53.74 0.85  91     up
 7   hdd 7.31349  1.00000 7.3 TiB 4.8 TiB 4.7 TiB   8 KiB 8.7 GiB  2.5 TiB  65.26 1.03 104     up
 8   hdd 7.31349  1.00000 7.3 TiB 4.9 TiB 4.9 TiB  20 KiB 8.9 GiB  2.4 TiB  67.28 1.07 108     up
 9   hdd 7.31349  1.00000 7.3 TiB 4.3 TiB 4.2 TiB  24 KiB 8.3 GiB  3.1 TiB  58.25 0.92  94     up
10   hdd 7.31349  1.00000 7.3 TiB 5.2 TiB 5.2 TiB  16 KiB 9.4 GiB  2.1 TiB  71.08 1.13 119     up
11   hdd 7.31349  1.00000 7.3 TiB 4.4 TiB 4.3 TiB 100 KiB 8.3 GiB  3.0 TiB  59.53 0.94  97     up
12   hdd 7.31349  1.00000 7.3 TiB 4.6 TiB 4.6 TiB  40 KiB 8.5 GiB  2.7 TiB  62.76 0.99 106     up
13   hdd 7.31349  1.00000 7.3 TiB 4.6 TiB 4.6 TiB 108 KiB 8.9 GiB  2.7 TiB  63.42 1.00 105     up
14   hdd 7.31349  1.00000 7.3 TiB 5.7 TiB 5.6 TiB  20 KiB 9.7 GiB  1.6 TiB  77.62 1.23 126     up
15   hdd 7.31349  1.00000 7.3 TiB 4.2 TiB 4.1 TiB   4 KiB 8.5 GiB  3.1 TiB  57.10 0.90  93     up
16   hdd 7.31349  1.00000 7.3 TiB 4.6 TiB 4.6 TiB  48 KiB 8.5 GiB  2.7 TiB  63.46 1.00 103     up
17   hdd 7.31349  1.00000 7.3 TiB 4.4 TiB 4.3 TiB 116 KiB 8.3 GiB  3.0 TiB  59.58 0.94 102     up
18   hdd 7.31349  1.00000 7.3 TiB 4.8 TiB 4.8 TiB  60 KiB 8.9 GiB  2.5 TiB  66.02 1.05 109     up
19   hdd 7.31349  1.00000 7.3 TiB 4.0 TiB 4.0 TiB 519 KiB 8.0 GiB  3.3 TiB  55.02 0.87  90     up
20   hdd 7.31349  1.00000 7.3 TiB 4.9 TiB 4.9 TiB  32 KiB 8.9 GiB  2.4 TiB  67.38 1.07 110     up
21   hdd 7.31349  1.00000 7.3 TiB 5.1 TiB 5.1 TiB  64 KiB 9.2 GiB  2.2 TiB  69.77 1.10 111     up
22   hdd 7.31349  1.00000 7.3 TiB 5.2 TiB 5.1 TiB  40 KiB 9.1 GiB  2.2 TiB  70.46 1.12 112     up
23   hdd 7.31349  1.00000 7.3 TiB 4.8 TiB 4.8 TiB  56 KiB 8.9 GiB  2.5 TiB  66.06 1.05 107     up
24   hdd 7.31349  1.00000 7.3 TiB 5.6 TiB 5.6 TiB  12 KiB 9.8 GiB  1.7 TiB  76.41 1.21 129     up
25   hdd 7.31349  1.00000 7.3 TiB 4.2 TiB 4.2 TiB   4 KiB 8.3 GiB  3.1 TiB  57.68 0.91  97     up
26   hdd 7.31349  1.00000 7.3 TiB 4.5 TiB 4.5 TiB  16 KiB 8.6 GiB  2.8 TiB  61.42 0.97  98     up
27   hdd 7.31349  1.00000 7.3 TiB 5.0 TiB 5.0 TiB  40 KiB 9.0 GiB  2.3 TiB  68.59 1.09 109     up
28   hdd 7.31349  1.00000 7.3 TiB 4.8 TiB 4.8 TiB  64 KiB 8.7 GiB  2.5 TiB  65.91 1.04 112     up
29   hdd 7.31349  1.00000 7.3 TiB 4.7 TiB 4.7 TiB   4 KiB 9.1 GiB  2.6 TiB  64.68 1.02 106     up
30   hdd 7.31349  1.00000 7.3 TiB 4.7 TiB 4.7 TiB  12 KiB 8.5 GiB  2.6 TiB  64.11 1.02 103     up
31   hdd 7.31349  1.00000 7.3 TiB 6.3 TiB 6.3 TiB 527 KiB  10 GiB 1004 GiB  86.60 1.37 146     up
32   hdd 7.31349  1.00000 7.3 TiB 4.0 TiB 3.9 TiB   4 KiB 7.8 GiB  3.3 TiB  54.47 0.86  92     up
33   hdd 7.31349  1.00000 7.3 TiB 4.3 TiB 4.3 TiB  40 KiB 8.4 GiB  3.0 TiB  58.91 0.93 101     up
34   hdd 7.31349  1.00000 7.3 TiB 4.4 TiB 4.4 TiB  56 KiB 8.3 GiB  2.9 TiB  60.20 0.95  99     up
35   hdd 7.31349  1.00000 7.3 TiB 5.3 TiB 5.3 TiB  52 KiB 9.4 GiB  2.0 TiB  72.40 1.15 120     up
36   hdd 7.31349  1.00000 7.3 TiB 5.1 TiB 5.1 TiB   4 KiB 9.2 GiB  2.2 TiB  69.82 1.11 116     up
37   hdd 7.31349  1.00000 7.3 TiB 3.7 TiB 3.7 TiB  20 KiB 7.9 GiB  3.6 TiB  51.21 0.81  82     up
38   hdd 7.31349  1.00000 7.3 TiB 4.3 TiB 4.3 TiB 419 KiB 8.0 GiB  3.0 TiB  58.89 0.93  99     up
39   hdd 7.31349  1.00000 7.3 TiB 5.3 TiB 5.3 TiB  48 KiB 9.9 GiB  2.0 TiB  73.10 1.16 117     up
40   hdd 7.31349  1.00000 7.3 TiB 4.2 TiB 4.2 TiB 104 KiB 8.4 GiB  3.1 TiB  57.58 0.91  96     up
41   hdd 7.31349  1.00000 7.3 TiB 4.6 TiB 4.6 TiB  68 KiB 8.7 GiB  2.7 TiB  63.43 1.00 101     up
42   hdd 7.31349  1.00000 7.3 TiB 4.7 TiB 4.6 TiB  16 KiB 8.9 GiB  2.6 TiB  64.05 1.01 103     up
43   hdd 7.31349  1.00000 7.3 TiB 5.0 TiB 4.9 TiB  60 KiB 8.9 GiB  2.4 TiB  67.84 1.07 108     up
44   hdd 7.31349  1.00000 7.3 TiB 5.0 TiB 4.9 TiB  24 KiB 9.0 GiB  2.3 TiB  68.01 1.08 109     up
45   hdd 7.31349  1.00000 7.3 TiB 4.9 TiB 4.9 TiB  28 KiB 9.0 GiB  2.4 TiB  67.30 1.07 114     up
46   hdd 7.31349  1.00000 7.3 TiB 5.1 TiB 5.0 TiB  44 KiB 9.3 GiB  2.3 TiB  69.20 1.10 110     up
47   hdd 7.31349  1.00000 7.3 TiB 4.3 TiB 4.3 TiB  20 KiB 8.4 GiB  3.0 TiB  58.97 0.93  95     up
48   hdd 9.16849  1.00000 9.2 TiB 5.3 TiB 5.3 TiB 515 KiB 7.2 GiB  3.8 TiB  58.17 0.92 120     up
49   hdd 9.16849  1.00000 9.2 TiB 5.3 TiB 5.3 TiB  96 KiB 7.2 GiB  3.8 TiB  58.09 0.92 118     up
50   hdd 9.16849  1.00000 9.2 TiB 5.1 TiB 5.0 TiB  92 KiB 6.9 GiB  4.1 TiB  55.10 0.87 110     up
51   hdd 9.16849  1.00000 9.2 TiB 5.9 TiB 5.9 TiB  72 KiB 8.2 GiB  3.2 TiB  64.82 1.03 136     up
52   hdd 9.16849  1.00000 9.2 TiB 5.3 TiB 5.2 TiB  44 KiB 7.2 GiB  3.8 TiB  58.06 0.92 123     up
53   hdd 9.16849  1.00000 9.2 TiB 5.9 TiB 5.8 TiB  52 KiB 7.8 GiB  3.3 TiB  64.27 1.02 137     up
54   hdd 9.16849  1.00000 9.2 TiB 4.7 TiB 4.6 TiB  68 KiB 6.3 GiB  4.4 TiB  51.52 0.82 105     up
55   hdd 9.16849  1.00000 9.2 TiB 4.3 TiB 4.3 TiB     0 B 5.9 GiB  4.8 TiB  47.43 0.75  99     up
56   hdd 9.16849  1.00000 9.2 TiB 5.4 TiB 5.4 TiB  72 KiB 7.7 GiB  3.7 TiB  59.24 0.94 121     up
57   hdd 9.16849  1.00000 9.2 TiB 5.2 TiB 5.1 TiB  16 KiB 7.1 GiB  4.0 TiB  56.68 0.90 112     up
58   hdd 9.16849  1.00000 9.2 TiB 5.9 TiB 5.8 TiB  16 KiB 7.9 GiB  3.3 TiB  63.82 1.01 131     up
59   hdd 9.16849  1.00000 9.2 TiB 5.1 TiB 5.0 TiB  40 KiB 6.8 GiB  4.1 TiB  55.64 0.88 113     up
                    TOTAL 461 TiB 291 TiB 288 TiB 4.2 MiB 508 GiB  170 TiB  63.15
MIN/MAX VAR: 0.75/1.37  STDDEV: 6.93




On 12/6/21 3:20 PM, Anthony D'Atri wrote:
Show me
`ceph osd tree`
`ceph osd df`
You could have two things going on.
1) Depending on your failure domain, CRUSH may be limited to the smallest node/rack’s capacity
2) Probably OSD fullness imbalance is the problem
> with a range of 47%-86%. Only one is at 86% and that one is reporting nearfull.
There you go.  `ceph df` will show you available space based on the most-full OSD.  That’s a very high range.  Get the balancer going and you’ll get more space.
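A quick way to eyeball the spread is to sort `ceph osd df` by the %USE column -- something like the below, though you may need to adjust the field number if the columns land differently in your output:

# ceph osd df | sort -nk17 | tail -5

That shows the handful of fullest OSDs, which are the ones capping the usable space ceph df reports.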
On Dec 6, 2021, at 1:13 PM, Seth Galitzer <sgsax@xxxxxxx> wrote:

I have a fairly vanilla Ceph Nautilus setup: one node that is the mgr, mds, and primary mon, and four nodes with twelve 8TB OSDs each, two of which are backup mons. I'm configured for 3 replicas and 2048 PGs, per the calculator. I recently added a new node with twelve 10TB OSDs. Because of the 3 replicas, I assumed I would gain about 40TB of available space out of the 120TB raw total, but after rebalancing I only gained about 20TB. Can anybody tell me why that might be, or if there's anything else I can do to get more available space out of my setup?
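(My back-of-the-envelope math there: 12 x 10 TB = 120 TB of new raw capacity, which is about 109 TiB; divided by 3 replicas that's roughly 36 TiB, hence the ~40TB I was expecting.)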

This information may be helpful to explain my situation. I use this cluster as storage for Bareos backups. Average space used across all 60 OSDs is 63%, with a range of 47%-86%. Only one is at 86% and that one is reporting nearfull. I have two pools, data and metadata, both sharing the same OSDs. Each OSD node has a separate dedicated SSD for storing journals.

Output from ceph fs status:
cephfs - 1 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | cfsb0 | Reqs:    0 /s |  997  | 1000  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata |  521M | 12.9T |
|   cephfs_data   |   data   |  288T | 12.9T |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+

Output from ceph health detail and ceph -s:
HEALTH_WARN 1 nearfull osd(s); 2 pool(s) nearfull
OSD_NEARFULL 1 nearfull osd(s)
    osd.31 is near full
POOL_NEARFULL 2 pool(s) nearfull
    pool 'cephfs_data' is nearfull
    pool 'cephfs_metadata' is nearfull
[root@cfsb0 ~]# ceph -s
  cluster:
    id:     a5acca4a-22bc-45bd-b0a6-f45e713101ff
    health: HEALTH_WARN
            1 nearfull osd(s)
            2 pool(s) nearfull

  services:
    mon: 3 daemons, quorum cfsb0,cfsb1,cfsb2 (age 3w)
    mgr: cfsb0(active, since 3w)
    mds: cephfs:1 {0=cfsb0=up:active}
    osd: 60 osds: 60 up (since 2w), 60 in (since 2w)

  data:
    pools:   2 pools, 2176 pgs
    objects: 25.21M objects, 96 TiB
    usage:   291 TiB used, 170 TiB / 461 TiB avail
    pgs:     2175 active+clean
             1    active+clean+scrubbing+deep

Output from df on the backup server, which is the Ceph client:
$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
<snip>
<ceph.mgr.ip>:/backup             110T   97T   13T  89% /mnt/backup

So I'm confused. Do I have 461TB, 300TB, or 110TB usable space out of 504TB raw? Why did I only gain 20TB from adding 120TB of raw osds? Is there anything I can tune to get more usable space?
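(To be explicit about where those numbers come from: 504TB is the nominal raw total, 48 x 8TB plus 12 x 10TB drives; 461 TiB is what ceph itself reports as raw capacity; and the 110T is the filesystem size df reports on the client mount.)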

I'm happy to provide additional information if it would be helpful.

Thanks.
Seth




--
Seth Galitzer
IT Administrator
Computer Science Department
Carl R. Ice College of Engineering
Kansas State University
http://www.cs.ksu.edu/~sgsax
sgsax@xxxxxxx
785-532-7790
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



