Hi Max,
Thank you - the mgr log complains about overlapping roots, so this is
indeed the cause :)
2022-06-10T13:56:37.669+0100 7f641f7e3700 0 [pg_autoscaler ERROR root] pool 14 has overlapping roots: {-1, -2}
2022-06-10T13:56:37.675+0100 7f641f7e3700 0 [pg_autoscaler WARNING root] pool 4 contains an overlapping root -1... skipping scaling
2022-06-10T13:56:37.677+0100 7f641f7e3700 0 [pg_autoscaler WARNING root] pool 8 contains an overlapping root -1... skipping scaling
2022-06-10T13:56:37.678+0100 7f641f7e3700 0 [pg_autoscaler WARNING root] pool 14 contains an overlapping root -2... skipping scaling
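For anyone else hitting this message: the roots the autoscaler is comparing can be listed directly - a quick sketch, assuming a reasonably recent release where these commands are available:

# list the per-device-class shadow roots (e.g. default~ssd, default~hdd) and their ids
ceph osd crush tree --show-shadow
# see which crush rule (and hence which root) each pool is using
ceph osd pool ls detail | grep crush_rule
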
I removed the pools, then started again from the following crush map:
-------------
[root@wilma-s1 ~]# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "ssd_replicated",
        "type": 1,
        "steps": [
            {
                "op": "take",
                "item": -2,
                "item_name": "default~ssd"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 3,
        "rule_name": "replicated_hdd",
        "type": 1,
        "steps": [
            {
                "op": "take",
                "item": -15,
                "item_name": "default~hdd"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]
-------------
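For reference, rules of this shape (one per device class) don't need hand editing of the crush map; if I remember correctly they can be created with the class-aware helper, e.g.:

# replicated rule restricted to the ssd shadow root, failure domain = host
ceph osd crush rule create-replicated ssd_replicated default host ssd
# and the hdd equivalent
ceph osd crush rule create-replicated replicated_hdd default host hdd
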
[root@wilma-s1 ~]# ceph osd pool create .mgr 32 32 replicated_hdd
pool '.mgr' created
[root@wilma-s1 ~]# ceph osd pool create mds_ssd 32 32 ssd_replicated
pool 'mds_ssd' created
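An existing pool can also be pointed at one of these rules rather than recreated - something like the following, with <pool> standing in for the pool name:

ceph osd pool set <pool> crush_rule replicated_hdd
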
[root@wilma-s1 ~]# ceph osd pool autoscale-status
POOL      SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr         0                3.0         7198T  0.0000                                  1.0      32              on         False
mds_ssd      0                3.0         2794G  0.0000                                  1.0      32              on         False
[root@wilma-s1 ~]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 7.0 PiB 6.9 PiB 126 TiB 126 TiB 1.75
ssd 2.7 TiB 2.7 TiB 3.1 GiB 3.1 GiB 0.11
TOTAL 7.0 PiB 6.9 PiB 126 TiB 126 TiB 1.75
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 21 32 0 B 0 0 B 0 2.2 PiB
mds_ssd 22 32 0 B 0 0 B 0 884 GiB
mgr log:
2022-06-10T15:14:16.894+0100 7efc8be7f700 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7914317240991744
2022-06-10T15:14:16.894+0100 7efc8be7f700 0 [pg_autoscaler INFO root] Pool '.mgr' root_id -15 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2022-06-10T15:14:16.895+0100 7efc8be7f700 0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 3000605081600
2022-06-10T15:14:16.895+0100 7efc8be7f700 0 [pg_autoscaler INFO root] Pool 'mds_ssd' root_id -2 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
As this works - each pool now resolves to a single shadow root (-15/default~hdd for .mgr, -2/default~ssd for mds_ssd) - I then added an EC pool:
[root@wilma-s1 ~]# ceph osd erasure-code-profile set eight_two k=8 m=2 crush-failure-domain=host crush-device-class=hdd
[root@wilma-s1 ~]# ceph osd pool create ec82pool 1024 1024 erasure eight_two
pool 'ec82pool' created
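To double-check what the EC pool ended up with, the profile and the pool's crush rule can be read back - e.g.:

# show the k/m, failure domain and device class recorded in the profile
ceph osd erasure-code-profile get eight_two
# confirm which crush rule the new pool was given
ceph osd pool get ec82pool crush_rule
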
[root@wilma-s1 ~]# ceph osd pool autoscale-status
POOL      SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr         0                3.0         7198T  0.0000                                  1.0      32              on         False
mds_ssd      0                3.0         2794G  0.0000                                  1.0      32              on         False
ec82pool     0               1.25         7198T  0.0000                                  1.0      32              on         False
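(As expected, the RATE column is just the raw-space multiplier: 3.0 for the 3x replicated pools, and (k+m)/k = 10/8 = 1.25 for the 8+2 EC pool.)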
[root@wilma-s1 ~]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 7.0 PiB 6.9 PiB 126 TiB 126 TiB 1.75
ssd 2.7 TiB 2.7 TiB 3.2 GiB 3.2 GiB 0.12
TOTAL 7.0 PiB 6.9 PiB 126 TiB 126 TiB 1.75
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 21 32 0 B 0 0 B 0 2.2 PiB
mds_ssd 22 32 0 B 0 0 B 0 884 GiB
ec82pool 23 995 0 B 0 0 B 0 5.2 PiB
All is now looking good.
thanks again,
Jake
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.