On Fri, Jun 29, 2018 at 10:01 AM Yu Haiyang <haiyangy@xxxxxxx> wrote:
>
> Ubuntu 16.04.3 LTS
>

4.4 kernel? AIO on cephfs is not supported by the 4.4 kernel; there AIO is actually synchronous IO. The 4.5 kernel is the first version that supports AIO on cephfs.

> On Jun 28, 2018, at 9:00 PM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>
> kernel version?
>
> On Thu, Jun 28, 2018 at 5:38 PM Yu Haiyang <haiyangy@xxxxxxx> wrote:
>>
>> Here you go. Below are the fio job options and the results.
>>
>> blocksize=4K
>> size=500MB
>> directory=[ceph_fs_mount_directory]
>> ioengine=libaio
>> iodepth=64
>> direct=1
>> runtime=60
>> time_based
>> group_reporting
>>
>> numjobs    Ceph FS Erasure Coding (k=2, m=1)    Ceph FS 3 Replica
>> 1 job      577KB/s                              765KB/s
>> 2 jobs     1.27MB/s                             793KB/s
>> 4 jobs     2.33MB/s                             1.36MB/s
>> 8 jobs     4.14MB/s                             2.36MB/s
>> 16 jobs    6.87MB/s                             4.40MB/s
>> 32 jobs    11.07MB/s                            8.17MB/s
>> 64 jobs    13.75MB/s                            15.84MB/s
>> 128 jobs   10.46MB/s                            26.82MB/s
>>
>> On Jun 28, 2018, at 5:01 PM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>
>> On Thu, Jun 28, 2018 at 10:30 AM Yu Haiyang <haiyangy@xxxxxxx> wrote:
>>
>> Hi Yan,
>>
>> Thanks for your suggestion.
>> No, I didn’t run fio on ceph-fuse. I mounted my Ceph FS in kernel mode.
>>
>> Command options of fio?
>>
>> Regards,
>> Haiyang
>>
>> On Jun 27, 2018, at 9:45 PM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>
>> On Wed, Jun 27, 2018 at 8:04 PM Yu Haiyang <haiyangy@xxxxxxx> wrote:
>>
>> Hi All,
>>
>> Using fio with job numbers ranging from 1 to 128, the random write speed for 4KB block size has been consistently around 1MB/s to 2MB/s.
>> Random read of the same block size can reach 60MB/s with 32 jobs.
>>
>> Did you run fio on ceph-fuse? If I remember right, fio does 1-byte writes; the overhead of passing the 1 byte to ceph-fuse is too high.
>>
>> Our ceph cluster consists of 4 OSDs, all running on SSDs connected through a switch with 9.06 Gbits/sec bandwidth.
>> Any suggestions please?
>>
>> Warmest Regards,
>> Haiyang

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
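
For anyone trying to reproduce the numbers above, here is a minimal sketch of how the quoted fio options could be assembled into a job file and run against a kernel-mounted CephFS directory. The job name, the rw=randwrite setting, the /mnt/cephfs path, and the numjobs value are illustrative assumptions filled in from the discussion, not taken verbatim from the thread.

  # 1) Check the client kernel first. Per the thread, the CephFS kernel
  #    client only supports real AIO from kernel 4.5 onward; on 4.4 (the
  #    Ubuntu 16.04.3 LTS default) libaio requests complete synchronously.
  uname -r

  # 2) randwrite.fio -- assembled from the options quoted above.
  #    rw=randwrite is assumed from the "random write" results discussed;
  #    /mnt/cephfs is a placeholder for the actual CephFS mount point;
  #    numjobs was varied from 1 to 128 in the reported runs.
  [randwrite]
  rw=randwrite
  blocksize=4K
  size=500MB
  directory=/mnt/cephfs
  ioengine=libaio
  iodepth=64
  direct=1
  runtime=60
  time_based
  group_reporting
  numjobs=8

  # 3) Run the job
  fio randwrite.fio

On a 4.4 kernel client the iodepth=64 setting buys nothing, since each libaio request is serviced synchronously; re-running the same job on a 4.5+ kernel (or comparing against ioengine=sync on 4.4) is a quick way to confirm whether the low per-job throughput comes from the AIO limitation rather than from the cluster itself.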