Ceph scaling & replica performance

Hi all,
I have some questions after running my scaling performance tests!

Setup:
Linux kernel: 3.2.0
OS: Ubuntu 12.04
Storage server: each has 11 HDDs (one OSD per HDD, 7200 rpm, 1 TB) + 10GbE NIC + RAID card: LSI MegaRAID SAS 9260-4i
         Every HDD is configured as its own single-drive RAID0, Write Policy: Write Back with BBU, Read Policy: ReadAhead, IO Policy: Direct (rough MegaCli sketch below)
Storage server count: 1 to 4

Ceph version : 0.48.2
Replicas : 2
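
(Each data disk is exported to the OS as its own single-drive RAID0 virtual disk. Roughly, the per-disk setup was done with something like the MegaCli line below, repeated per HDD; the enclosure/slot IDs are just placeholders:)

MegaCli -CfgLdAdd -r0 [252:0] WB RA Direct -a0    # one RAID0 VD per HDD: Write Back (with BBU), ReadAhead, Direct IO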

FIO cmd:
[Sequential Read]
fio --iodepth=32 --numjobs=1 --runtime=120 --bs=65536 --rw=read --ioengine=libaio --group_reporting --direct=1 --eta=always --ramp_time=10 --thinktime=10

[Sequential Write]
fio --iodepth=32 --numjobs=1 --runtime=120 --bs=65536 --rw=write --ioengine=libaio --group_reporting --direct=1 --eta=always --ramp_time=10 --thinktime=10

[Random Read]
fio --iodepth=32 --numjobs=8 --runtime=120 --bs=65536 --rw=randread --ioengine=libaio --group_reporting --direct=1 --eta=always --ramp_time=10 --thinktime=10

[Random Write]
fio --iodepth=32 --numjobs=8 --runtime=120 --bs=65536 --rw=randwrite --ioengine=libaio --group_reporting --direct=1 --eta=always --ramp_time=10 --thinktime=10

A Ceph client was used to create a 1 TB RBD image for testing; the client also has a 10GbE NIC and runs Linux kernel 3.2.0 on Ubuntu 12.04.
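
The image was created and mapped on the client roughly as follows (image name and device path are illustrative), and fio was pointed at the mapped block device via --filename:

rbd create fio-test --size 1048576        # ~1 TB image in the default "rbd" pool
rbd map fio-test                          # shows up as e.g. /dev/rbd0
fio --name=test --filename=/dev/rbd0 --iodepth=32 ...    # plus the options listed above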

Performance result: 
                         Bandwidth (MB/sec)
  Storage servers | Sequential Read | Sequential Write | Random Read | Random Write
  ----------------+-----------------+------------------+-------------+--------------
         1        |       259       |        76        |     837     |      26
         2        |       349       |       121        |     950     |      45
         3        |       354       |       108        |     490     |      71
         4        |       338       |       103        |     610     |      89

We expected bandwidth to increase as storage servers were added in every case, but the results do not show that!
Can you share your thoughts on how read/write bandwidth should behave as the number of storage servers increases?

In another test, we fixed the setup at 4 storage servers and adjusted the number of replicas from 2 to 4.
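
(The replica count was changed between runs with something like the following; "rbd" is just the default pool name here:)

ceph osd pool set rbd size 3    # and likewise size 4 for the next run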

Performance result:

                         Bandwidth (MB/sec)
     Replicas     | Sequential Read | Sequential Write | Random Read | Random Write
  ----------------+-----------------+------------------+-------------+--------------
         2        |       338       |       103        |     614     |      89
         3        |       337       |        76        |     791     |      62
         4        |       337       |        60        |     754     |      43

It is easy to understand that write bandwidth decreases as the number of replicas increases, but why doesn't read bandwidth increase?


Kelvin


