hello

I mean a filesystem mounted on top of a mapped rbd:

rbd create --size=10G kube/bench
rbd feature disable kube/bench object-map fast-diff deep-flatten
rbd map bench --pool kube --name client.admin
/sbin/mkfs.ext4 /dev/rbd/kube/bench
mount /dev/rbd/kube/bench /mnt/
cd /mnt/

About the benchmarks I ran: I tried to compare apples to apples (I hope):

block size: 4k
threads: 1
size of data: 1G

Writes are great:

rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 1G --io-pattern seq
elapsed: 12  ops: 262144  ops/sec: 20758.70  bytes/sec: 85027625.70

rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total 10G --io-pattern rand
elapsed: 14  ops: 262144  ops/sec: 17818.16  bytes/sec: 72983201.32

Reads are very, very slow:

rbd -p kube bench kube/bench --io-type read --io-threads 1 --io-total 1G --io-pattern rand
elapsed: 445  ops: 81216  ops/sec: 182.37  bytes/sec: 747006.15

rbd -p kube bench kube/bench --io-type read --io-threads 1 --io-total 1G --io-pattern seq
elapsed: 14  ops: 14153  ops/sec: 957.57  bytes/sec: 3922192.15

Perhaps I'm hitting this issue:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-August/028878.html

For the record: I have an old cluster running VMs on Ceph 10.2.11. With a dd benchmark I reach the cluster's limit:

dd if=/dev/zero of=test bs=4M count=250 oflag=direct
1048576000 bytes (1.0 GB) copied, 11.5469 s, 90.8 MB/s

and pgbench gives me 200 transactions per second.

On the new cluster, with containers running on a filesystem on top of a mapped rbd and Ceph Nautilus, I get:

dd if=/dev/zero of=test bs=4M count=250 oflag=direct
1048576000 bytes (1.0 GB, 1000 MiB) copied, 27.0351 s, 38.8 MB/s

and pgbench gives me 10 transactions per second.

Something is not OK somewhere :)
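For what it's worth, the pgbench numbers above came from a single-client run, something along these lines (the scale factor and duration here are illustrative, not the exact values I used):

pgbench -i -s 50 bench      # initialize a test database (scale factor illustrative)
pgbench -c 1 -T 60 bench    # 1 client, 60 seconds, reports transactions per second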
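To cross-check the rbd bench numbers through the mounted filesystem (and take the rbd bench tool out of the equation), a fio run like this should approximate the same 4k, single-threaded random-read workload (the file path is just an example):

fio --name=randread-check --filename=/mnt/fio-test \
    --rw=randread --bs=4k --size=1G \
    --numjobs=1 --iodepth=1 --direct=1 --ioengine=libaio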
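I will also check the readahead setting on the mapped device, in case that is related to what the thread above describes (assuming the image was mapped as /dev/rbd0):

cat /sys/block/rbd0/queue/read_ahead_kb    # current readahead in KiB
blockdev --setra 4096 /dev/rbd0            # raise readahead (in 512-byte sectors), then re-run the seq read bench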
oau

On Wednesday, August 14, 2019 at 15:56 +0200, Ilya Dryomov wrote: