I have tried to create erasure-coded pools for CephFS using the examples given at
https://swamireddy.wordpress.com/2016/01/26/ceph-diff-between-erasure-and-replicated-pool-type/
but this is resulting in some weird behaviour. The only number the health warning
has in common with my commands is the 128 I used when creating the metadata pool;
is that related?
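
For context, the erasure-code profile was set up along these lines. I no longer
have the exact command, so the k/m values and the failure domain below are just my
best recollection, inferred from the profile name rather than from saved history:

# Best recollection only; k=4/m=2 and the failure domain are guesses
# based on the profile name, not the actual command history.
ceph osd erasure-code-profile set ec-42-profile2 k=4 m=2 crush-failure-domain=host
ceph osd erasure-code-profile get ec-42-profile2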
[ceph@thor ~]$ ceph -s
  cluster:
    id:     b688f541-9ad4-48fc-8060-803cb286fc38
    health: HEALTH_WARN
            Reduced data availability: 128 pgs inactive, 128 pgs incomplete

  services:
    mon: 3 daemons, quorum thor,odin,loki
    mgr: odin(active), standbys: loki, thor
    mds: cephfs-1/1/1 up {0=thor=up:active}, 1 up:standby
    osd: 5 osds: 5 up, 5 in

  data:
    pools:   2 pools, 256 pgs
    objects: 21 objects, 2.19KiB
    usage:   5.08GiB used, 7.73TiB / 7.73TiB avail
    pgs:     50.000% pgs not active
             128 creating+incomplete
             128 active+clean
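
I haven't captured any more detail yet, but if it helps I can run something like
the following to see which pool the inactive PGs belong to:

ceph health detail
ceph osd pool ls detail
ceph pg dump_stuck inactive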
I'm pretty sure these were the commands used:
ceph osd pool create storage 1024 erasure ec-42-profile2
ceph osd pool create storage 128 erasure ec-42-profile2
ceph fs new cephfs storage_metadata storage
ceph osd pool create storage_metadata 128
ceph fs new cephfs storage_metadata storage
ceph fs add_data_pool cephfs storage
ceph osd pool set storage allow_ec_overwrites true
ceph osd pool application enable storage cephfs
ceph fs add_data_pool default storage
ceph fs add_data_pool cephfs storage
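
For comparison, my reading of the blog post and the CephFS documentation is that
the intended sequence looks roughly like the one below. This is just my
understanding, not something I have verified on this cluster, and the pg counts
are simply the ones I was already using:

# Replicated metadata pool (CephFS metadata cannot live on an EC pool).
ceph osd pool create storage_metadata 128

# EC data pool; CephFS needs overwrites enabled on it.
ceph osd pool create storage 128 128 erasure ec-42-profile2
ceph osd pool set storage allow_ec_overwrites true
ceph osd pool application enable storage cephfs

# Using the EC pool as the default data pool needs --force;
# the alternative is a replicated default data pool plus
# "ceph fs add_data_pool cephfs storage".
ceph fs new cephfs storage_metadata storage --force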