Dear all,
I am currently running CephFS with its data pool placed directly on an erasure-coded pool, on a small test system (4 hosts, ceph osd tree below).
I followed the steps mentioned here: http://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
I am running luminous 12.2.2 with bluestore.
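For reference, the setup was roughly along the lines of the commands below; the profile name ec21profile, the PG counts and the directory path are only placeholders here, not necessarily the exact values I used:

  # EC profile with k=2, m=1 and host failure domain
  ceph osd erasure-code-profile set ec21profile k=2 m=1 crush-failure-domain=host
  # EC data pool with overwrites enabled so CephFS can write to it directly
  ceph osd pool create ec21pool 128 128 erasure ec21profile
  ceph osd pool set ec21pool allow_ec_overwrites true
  # attach the EC pool as an additional data pool and point a directory at it
  ceph fs add_data_pool cephfs ec21pool
  setfattr -n ceph.dir.layout.pool -v ec21pool /mnt/cephfs/ecdata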
During fio tests I see extremely high memory usage, which results in the mons running out of memory and getting OOM-killed. With a replicated data pool only, I see no problems.
Is this known behaviour? If you need additional information, let me know.
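The fio jobs I run against the CephFS mount look roughly like the following; the parameters and the mount path are only illustrative, not the exact job file I used:

  fio --name=ecwrite --directory=/mnt/cephfs/ecdata --rw=write --bs=4M --size=4G \
      --numjobs=4 --ioengine=libaio --direct=1 --group_reporting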
The hosts have dual-core CPUs and 4 GB of memory each.
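To see where the memory goes during the fio runs, I intend to collect something like the following on each host (osd.0 is only an example, and the grep pattern is approximate):

  # per-daemon memory breakdown from the admin socket
  ceph daemon osd.0 dump_mempools
  # current bluestore cache settings on that OSD
  ceph daemon osd.0 config show | grep bluestore_cache
  # resident set size of the ceph daemons on the host
  ps -C ceph-osd,ceph-mon -o pid,rss,cmd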
EC profile:
k=2
m=1
ceph -s:
  health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ctest01,ctest02,ctest03
    mgr: ctest02(active), standbys: ctest01
    mds: cephfs-1/1/1 up {0=ctest02=up:active}
    osd: 8 osds: 8 up, 8 in
  data:
    pools:   4 pools, 384 pgs
    objects: 22 objects, 3383 kB
    usage:   13896 MB used, 1830 GB / 1843 GB avail
    pgs:     384 active+clean
ceph fs ls:
name: cephfs, metadata pool: cephfs_metadata, data pools: [replicated ec21pool ]
ceph osd tree:
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.80034 root default
-9 0.56378 host ctest01
1 hdd 0.29099 osd.1 up 1.00000 1.00000
7 hdd 0.27280 osd.7 up 1.00000 1.00000
-7 0.72749 host ctest02
0 hdd 0.27280 osd.0 up 1.00000 1.00000
2 hdd 0.45470 osd.2 up 1.00000 1.00000
-6 0.36368 host ctest03
3 hdd 0.29099 osd.3 up 1.00000 1.00000
8 hdd 0.07269 osd.8 up 1.00000 1.00000
-8 0.14539 host ctest04
4 hdd 0.07269 osd.4 up 1.00000 1.00000
5 hdd 0.07269 osd.5 up 1.00000 1.00000
Cheers,
Markus