Hello!
We have a small Proxmox farm with Ceph consisting of three nodes.
Each node has 6 disks, each with a capacity of 4 TB.
Only one pool has been created on these disks.
Its size/min_size is 2/1.
In theory, this pool should have a capacity of about 32.74 TiB.
But the ceph df command reports only 22.4 TiB (USED + MAX AVAIL = 16.7 + 5.7).
How can this difference be explained?
The Ceph version is 12.2.12-pve1.
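
For reference, here is the back-of-envelope arithmetic behind the 32.74 TiB figure (a rough estimate only; it ignores BlueStore overhead, the full/nearfull ratios, and uneven PG placement):

# Rough capacity estimate: 3 nodes x 6 disks x 4 TB, replicated with size=2.
osds = 3 * 6                       # 18 OSDs in total
raw_tb = osds * 4                  # 72 TB raw (decimal terabytes)
raw_tib = raw_tb * 1e12 / 2**40    # ~65.48 TiB raw
usable_tib = raw_tib / 2           # size=2 -> ~32.74 TiB usable
print(round(raw_tib, 2), round(usable_tib, 2))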
Output of ceph df:
POOLS:
    NAME             ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY     READ        WRITE      RAW USED
    ala01vf01p01     7      N/A               N/A             16.7TiB     74.53     5.70TiB       4411119     4.41M     2.62GiB     887MiB     33.4TiB
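
A quick sanity check on those figures (my own arithmetic, not ceph output): RAW USED is exactly twice USED, which is consistent with size=2, and USED + MAX AVAIL is where the 22.4 TiB above comes from.

# Sanity check on the ceph df figures (my arithmetic, not ceph output).
used_tib = 16.7         # USED
raw_used_tib = 33.4     # RAW USED
max_avail_tib = 5.70    # MAX AVAIL
print(raw_used_tib / used_tib)     # 2.0 -> consistent with size=2
print(used_tib + max_avail_tib)    # 22.4 TiB apparently available to the pool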
CRUSH map:
host n01vf01 {
    id -3               # do not change unnecessarily
    id -4 class hdd     # do not change unnecessarily
    id -18 class nvme   # do not change unnecessarily
    # weight 22.014
    alg straw2
    hash 0              # rjenkins1
    item osd.0 weight 3.669
    item osd.13 weight 3.669
    item osd.14 weight 3.669
    item osd.15 weight 3.669
    item osd.16 weight 3.669
    item osd.17 weight 3.669
}
host n02vf01 {
    id -5               # do not change unnecessarily
    id -6 class hdd     # do not change unnecessarily
    id -19 class nvme   # do not change unnecessarily
    # weight 22.014
    alg straw2
    hash 0              # rjenkins1
    item osd.1 weight 3.669
    item osd.8 weight 3.669
    item osd.9 weight 3.669
    item osd.10 weight 3.669
    item osd.11 weight 3.669
    item osd.12 weight 3.669
}
host n04vf01 {
    id -34              # do not change unnecessarily
    id -35 class hdd    # do not change unnecessarily
    id -36 class nvme   # do not change unnecessarily
    # weight 22.014
    alg straw2
    hash 0              # rjenkins1
    item osd.7 weight 3.669
    item osd.27 weight 3.669
    item osd.24 weight 3.669
    item osd.25 weight 3.669
    item osd.26 weight 3.669
    item osd.28 weight 3.669
}
root default {
    id -1               # do not change unnecessarily
    id -2 class hdd     # do not change unnecessarily
    id -21 class nvme   # do not change unnecessarily
    # weight 66.042
    alg straw2
    hash 0              # rjenkins1
    item n01vf01 weight 22.014
    item n02vf01 weight 22.014
    item n04vf01 weight 22.014
}
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
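
For completeness, the CRUSH weights above line up with the hardware (again my own arithmetic, assuming the usual convention that an OSD's CRUSH weight is its capacity in TiB):

# CRUSH weight check (my arithmetic; assumes weight = capacity in TiB).
osd_weight = 3.669                 # per-OSD weight in the map (~4 TB disk)
host_weight = 6 * osd_weight       # 22.014 -> matches each host bucket
root_weight = 3 * host_weight      # 66.042 -> matches the root bucket
print(host_weight, root_weight, root_weight / 2)   # /2 is ~33 TiB at size=2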