Questions about r/w low performance on ceph pacific vs ceph luminous

Hey,

We have noticed that RBD image write performance on Ceph Pacific is much lower than on Ceph Luminous.

We performed a simple dd write test with the following results
(the hardware and the OSD layout are the same in both environments):

Writing to an RBD image (via a VM on OpenStack) on Luminous:
[root@noam-test-storagebm-0 shai]# dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.19544 s, 489 MB/s

Writing to an RBD image (PV mounted to a k8s pod) on Pacific:
[root@noam-test-masterbm-0 noam (Active)]# dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.51966 s, 195 MB/s

As you can see, writes are roughly 2.5 times faster on Luminous than on Pacific.

We've also noticed that changing the oflag value from "direct" to "nocache" makes the write roughly 4 times faster (writing to the RBD image on Pacific):
[root@noam-test-masterbm-0 noam (Active)]# dd if=/dev/zero of=testfile bs=1M count=1024 oflag=nocache
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.32078 s, 813 MB/s
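Our understanding (please correct us if we are wrong) is that the two flags exercise very different paths: oflag=direct opens the file with O_DIRECT and bypasses the page cache entirely, while oflag=nocache performs ordinary buffered writes and only advises the kernel to drop the cached pages afterwards, so the 813 MB/s figure may largely reflect page-cache speed. A minimal local sketch of the two invocations (scratch directory and small sizes chosen just for illustration):

```shell
# Run in a throwaway directory; file names and sizes here are arbitrary.
cd "$(mktemp -d)"

# O_DIRECT: bypasses the page cache, so every 1 MiB block must reach the
# backing store (on our setup, the RBD layer) before dd moves on.
dd if=/dev/zero of=direct.bin bs=1M count=16 oflag=direct

# nocache: a normal buffered write; dd merely advises the kernel
# (posix_fadvise with POSIX_FADV_DONTNEED) to evict the cached pages
# afterwards, so the reported throughput is mostly memory speed.
dd if=/dev/zero of=nocache.bin bs=1M count=16 oflag=nocache
```

On our hosts the nocache run consistently reports much higher throughput, which is what made us suspect the two flags are not comparable as benchmarks.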

We were wondering:

  1.  Why does oflag=direct vs oflag=nocache make such a significant difference in speed when writing to an RBD image?
  2.  Why does oflag=direct show such a big I/O performance difference between Luminous and Pacific? Is it related to the move from ceph-disk to ceph-volume?
  3.  Could it be related to the BIOS/RAID controller cache configuration?
  4.  Is there a documentation page on how to optimize read/write performance for both RBD and CephFS?

Best regards,
Shai

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
