I have a fresh deployment and it always hits this problem: the pg count stays at 8 no matter what I do, even if I increase the size of the OSD. I've seen others report the same thing, but with the RBD pool; I don't have an RBD pool, and this cluster was just deployed fresh with ansible.
health: HEALTH_WARN
        1 MDSs report slow metadata IOs
        Reduced data availability: 16 pgs inactive
        Degraded data redundancy: 16 pgs undersized
        too few PGs per OSD (16 < min 30)
data:
  pools:   2 pools, 16 pgs
  objects: 0 objects, 0 B
  usage:   2.0 GiB used, 39 GiB / 41 GiB avail
  pgs:     100.000% pgs not active
           16 undersized+peered
# ceph osd pool ls
cephfs_data
cephfs_metadata
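Each pool seems to sit at pg_num 8 (2 pools x 8 = the 16 pgs above, and the errors below also say pg_num 8). The per-pool values can be queried like this, in case anyone wants the exact numbers:

# ceph osd pool get cephfs_data pg_num
# ceph osd pool get cephfs_data pgp_num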
# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.03989 root default
-3 0.03989 host mytesthost104
0 hdd 0.03989 osd.0 up 1.00000 1.00000
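If I read the tree right, there is only a single OSD, and I assume the pools were created with the default replicated size of 3, which a one-OSD cluster can never satisfy; that would explain the undersized+peered PGs. For a single-node test I could probably drop the size (a sketch, untested here; newer releases may also want --yes-i-really-mean-it for size 1):

# ceph osd pool set cephfs_data size 1
# ceph osd pool set cephfs_data min_size 1
# ceph osd pool set cephfs_metadata size 1
# ceph osd pool set cephfs_metadata min_size 1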
# ceph osd pool set cephfs_data pgp_num 64
Error EINVAL: specified pgp_num 64 > pg_num 8
# ceph osd pool set cephfs_data pgp_num 256
Error EINVAL: specified pgp_num 256 > pg_num 8
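Re-reading the error, pgp_num apparently cannot exceed pg_num, so presumably pg_num has to be raised first and pgp_num second, something like this (64 is just an example target):

# ceph osd pool set cephfs_data pg_num 64
# ceph osd pool set cephfs_data pgp_num 64

But that brings me back to the original problem: pg_num itself is stuck at 8. Any pointers would be appreciated.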