Question on Sequential Write performance at 4K blocksize


 



Hi All,

 

I have a question about sequential write performance at 4K block size.

 

Here is my configuration:

 

Ceph cluster: 6 nodes, each with:

20x HDDs (OSDs) - 10K RPM, 1.2 TB SAS disks

4x SSDs - Intel S3710, 400 GB; used for OSD journals shared across the 20 HDDs (i.e., an SSD:HDD journal ratio of 1:5)

 

Network:

- Client network – 10Gbps

- Cluster network – 10Gbps

- Each node with dual NIC – Intel 82599 ES – driver version 4.0.1

 

Traffic generators:

2 client servers, each dual-socket Intel with 16 physical cores (32 logical cores with hyper-threading enabled)

 

Test program:

FIO – sequential read/write; random read/write

Blocksizes – 4k, 32k, 256k…

FIO – Number of jobs = 32; IO depth = 64

Runtime = 10 minutes; Ramptime = 5 minutes

Filesize = 4096g (4 TiB)
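
For reference, the runs are driven roughly like the sketch below (expressed as a small Python wrapper for readability; the ioengine, direct flag, and target path are placeholders and not necessarily what my actual setup uses):

# Sketch only: rebuilds the sequential-write job from the parameters listed
# above. The ioengine, direct flag, and filename are placeholder assumptions.
import subprocess

def run_seq_write(block_size):
    cmd = [
        "fio",
        "--name=seq-write-" + block_size,
        "--rw=write",                     # sequential write
        "--bs=" + block_size,             # 4k, 32k, 256k, ...
        "--numjobs=32",                   # number of jobs = 32
        "--iodepth=64",                   # IO depth = 64
        "--ramp_time=300",                # 5-minute ramp
        "--runtime=600",                  # 10-minute measured run
        "--time_based",
        "--size=4096g",                   # filesize = 4096g as above
        "--ioengine=libaio",              # placeholder: async IO engine
        "--direct=1",                     # placeholder: bypass page cache
        "--filename=/mnt/ceph/testfile",  # placeholder target path
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)

for bs in ["4k", "32k", "256k", "1024k", "4096k"]:
    run_seq_write(bs)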

 

I observe that my sequential write performance at 4K block size is very low: I am getting around 6 MB/s of bandwidth. Performance improves significantly at larger block sizes (shown below).

 

FIO – Sequential Write test

Block Size    Sequential Write Bandwidth (KB/s)
4K            5694
32K           141020
256K          747421
1024K         602236
4096K         683029
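
For reference, converting those bandwidth figures into per-second IO counts (bandwidth in KB/s divided by block size in KB) is a quick way to see the per-IO rate each block size sustains; a small sketch:

# Sketch: convert the reported sequential-write bandwidth (KB/s) into IOPS.
# Figures are copied from the table above.
results_kb_per_sec = {
    "4K": 5694,
    "32K": 141020,
    "256K": 747421,
    "1024K": 602236,
    "4096K": 683029,
}

for bs, bw in results_kb_per_sec.items():
    block_kb = int(bs.rstrip("K"))
    iops = bw / block_kb
    print(f"{bs:>6}: {bw:>7} KB/s ~= {iops:,.0f} IOPS")

That works out to roughly 1,400 IOPS at 4K versus roughly 4,400 at 32K and 2,900 at 256K.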

 

Here are my questions:

- Why is the sequential write performance at 4K block size so low? Is this in line with what others see?

- Is it because of too few clients (traffic generators)? I am planning to increase the number of client servers to 4.

- There is a later version of the Intel NIC driver, v4.3.15. Do you think upgrading to it will improve performance?

 

Any thoughts or pointers will be helpful.

 

Thanks,

 

- epk


