Bad performance when two fio processes write to the same image

Hi Guys

I am testing the performance of Jewel (10.2.2) with fio, but I found that performance drops dramatically when two processes write to the same image.

My environment:

1. Server:

One mon and four OSDs running on the same server.

One Intel P3700 400GB SSD with 4 partitions, each used as one OSD journal (journal size is 10GB).

One Intel P3700 400GB SSD with 4 partitions, each formatted as XFS and used for one OSD's data (each data partition is 90GB).

10GbE network

CPU: Intel(R) Xeon(R) CPU E5-2660 (it is not the bottleneck)

Memory: 256GB (it is not the bottleneck)

2. Client:

10GbE network

CPU: Intel(R) Xeon(R) CPU E5-2660 (it is not the bottleneck)

Memory: 256GB (it is not the bottleneck)

3. Ceph:

Default configuration, except that I use the async messenger (I have also tried the simple messenger and got nearly the same result).

10GB image in a pool with 256 PGs (a minimal configuration sketch is shown right after this list).
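
As a rough sketch of this setup (the pool name "rbd" and image name "image1" are assumptions for illustration, not necessarily the names used):

# ceph.conf fragment: the only non-default setting is the async messenger
[global]
ms_type = async

# Pool with 256 placement groups and the 10GB test image (rbd sizes are in MB)
ceph osd pool create rbd 256 256
rbd create image1 --size 10240 --pool rbd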

Test Cases

1. One fio process: bs 4KB; iodepth 256; direct 1; ioengine rbd; randwrite (see the fio job sketch after this list)

Throughput is nearly 60MB/s and IOPS is nearly 15K.

All four OSDs are nearly equally busy.

2. Two fio processes: bs 4KB; iodepth 256; direct 1; ioengine rbd; randwrite (both writing to the same image)

Throughput is nearly 4MB/s each and IOPS is nearly 1.5K each. Terrible.

I found that only one OSD is busy; the other three are much more idle in terms of CPU.

I also ran fio on two separate clients and got the same result.

3. Two fio processes: bs 4KB; iodepth 256; direct 1; ioengine rbd; randwrite (one writing to image1, the other to image2)

Throughput is nearly 35MB/s each and IOPS is nearly 8.5K each. Reasonable.

All four OSDs are nearly equally busy.
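
For reference, a minimal fio job file roughly matching Test 1 is sketched below. The pool name "rbd", image name "image1", and the "admin" cephx user are assumptions for illustration; adjust them to the actual setup.

; test1.fio - one writer to a single RBD image (sketch, not the exact job used)
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=image1
rw=randwrite
bs=4k
iodepth=256
direct=1
time_based=1
runtime=60

[single-writer]

Test 2 then corresponds to starting two such fio processes at the same time against the same rbdname, and Test 3 to pointing the second process at image2 instead.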

 

 

Could someone help explain the result of Test 2?

 

Thanks


 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
