Ceph luminous - DELL R620 - performance expectations

Hi,

I have been trying to assess performance on a 3-server cluster for a few days now.

The best I got is shown below (244 MB/s peak, with 3 OSDs and a 15 GB SSD partition).
With 3 servers, wouldn't it make sense to get almost 3 times the speed of a single hard drive?


Questions:
- what performance should I be aiming for?
- how should I configure the SSD cache/controller?
- how should I configure the hard disk cache/controller?
- any other performance tweaks besides a tuned sysctl.conf?
- how does it scale (e.g. 3 servers = 600 MB/s, 4 servers = 800 MB/s, assuming one hard drive per server)?
- is there a formula / rule-of-thumb approach to estimate performance?
  (e.g. if I want to create an SSD-only pool with one drive per server, what should I expect? See the back-of-envelope sketch right after this list.)
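For reference, the back-of-envelope model I have been using (my own rule of thumb, not something from the docs -- the numbers below are assumptions): with a replicated pool every client byte is written "size" times across the cluster, so aggregate client write bandwidth is roughly (OSD count x per-disk sequential write MB/s) / replication size, minus overhead:

    # rule-of-thumb ceiling for replicated writes (all numbers are assumptions)
    # client_MBps ~= (osd_count * per_disk_write_MBps) / replication_size
    echo $(( 3 * 150 / 3 ))    # 3 OSDs at ~150 MB/s each, size=3 -> ~150 MB/s ceiling

If that rule holds, 3 servers with one OSD each and the default size=3 can only ever deliver about one disk's worth of client bandwidth (the 244 MB/s bursts would then be the SSD partition absorbing writes), which would roughly match what I am seeing -- but I would like confirmation.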


MANY THANKS !!!

Configuration
  kernel: 3.10.0-693.11.6.el7.x86_64
  network: bonded 10 GbE, ixgbe 5.4
  controller: PERC H710 Mini
  /dev/sda is the SSD (Toshiba 400 GB PX04SHB040)
  /dev/sd[b-f] are 10K Enterprise drives (Toshiba 600 GB AL13SEB600)

  [root@osd01 ~]#  megacli -LDGetProp  -Cache -LALL -a0

Adapter 0-VD 0(target id: 0): Cache Policy:WriteThrough, ReadAheadNone, Direct, No Write Cache if bad BBU
Adapter 0-VD 1(target id: 1): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 2(target id: 2): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 3(target id: 3): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 4(target id: 4): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU
Adapter 0-VD 5(target id: 5): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU

Exit Code: 0x00
[root@osd01 ~]#  megacli -LDGetProp  -DskCache -LALL -a0

Adapter 0-VD 0(target id: 0): Disk Write Cache : Disabled
Adapter 0-VD 1(target id: 1): Disk Write Cache : Disk's Default
Adapter 0-VD 2(target id: 2): Disk Write Cache : Disk's Default
Adapter 0-VD 3(target id: 3): Disk Write Cache : Disk's Default
Adapter 0-VD 4(target id: 4): Disk Write Cache : Disk's Default
Adapter 0-VD 5(target id: 5): Disk Write Cache : Disk's Default
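One thing that stands out to me: VD 0 (which I believe maps to the SSD, /dev/sda) is WriteThrough while the spinners are WriteBack, and the spinners are left at the disk's default on-drive cache. If I wanted to change that, I think the invocations would be along these lines (illustrative only -- please correct me if the policy choice is wrong for Ceph):

    megacli -LDSetProp WB -L0 -a0                # WriteBack on the SSD virtual disk (VD 0)
    megacli -LDSetProp -DisDskCache -LALL -a0    # disable the drives' own volatile write cache on all VDs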

[root@osd01 ~]# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   20628 MB in  1.99 seconds = 10355.20 MB/sec
 Timing buffered disk reads: 1610 MB in  3.00 seconds = 536.23 MB/sec
[root@osd01 ~]# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   19940 MB in  1.99 seconds = 10009.61 MB/sec
 Timing buffered disk reads: 602 MB in  3.01 seconds = 200.27 MB/sec
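hdparm only exercises sequential reads, which says little about the synchronous small writes a journal/WAL does, so I assume a sync-write fio run like the one below is closer to the real workload (file path and size are placeholders I made up; point it at a scratch file on the SSD, not the raw device):

    fio --name=ssd-sync-write --filename=/mnt/ssd/fio.test --size=1G \
        --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 \
        --runtime=60 --time_based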



 /sys/block/sd* settings
   read_ahead_kb = 4096
   scheduler = deadline
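These are set at runtime; to make them stick across reboots I assume a udev rule like this works (the file name is arbitrary, my own choice):

    # /etc/udev/rules.d/99-ceph-disks.rules
    ACTION=="add|change", KERNEL=="sd[a-f]", ATTR{queue/scheduler}="deadline"
    ACTION=="add|change", KERNEL=="sd[a-f]", ATTR{queue/read_ahead_kb}="4096"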


 egrep -v "^#|^$" /etc/sysctl.conf
net.ipv4.tcp_sack = 0
net.core.netdev_budget = 600
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 0
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
vm.min_free_kbytes = 262144
vm.swappiness = 0
vm.vfs_cache_pressure = 100
fs.suid_dumpable = 0
kernel.core_uses_pid = 1
kernel.msgmax = 65536
kernel.msgmnb = 65536
kernel.randomize_va_space = 1
kernel.sysrq = 0
kernel.pid_max = 4194304
fs.file-max = 100000
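On the network side I have only done basic checks so far; I assume something like the following is enough to rule the bond out (the host name and MTU are assumptions about my own setup):

    ping -M do -s 8972 osd02      # verify jumbo frames end-to-end, if MTU 9000 is set
    iperf3 -c osd02 -P 4 -t 30    # parallel streams, since a bond rarely exceeds one link per flow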


rados bench -p scbench 300 write --no-cleanup && rados bench -p scbench 300 seq

Total time run:         311.478719
Total writes made:      8983
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     115.359
Stddev Bandwidth:       67.5735
Max bandwidth (MB/sec): 244
Min bandwidth (MB/sec): 0
Average IOPS:           28
Stddev IOPS:            16
Max IOPS:               61
Min IOPS:               0
Average Latency(s):     0.554779
Stddev Latency(s):      1.57807
Max latency(s):         21.0212
Min latency(s):         0.00805304

Total time run:       303.082321
Total reads made:     2558
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   33.7598
Average IOPS:         8
Stddev IOPS:          9
Max IOPS:             38
Min IOPS:             0
Average Latency(s):   1.89518
Max latency(s):       52.1244
Min latency(s):       0.0191481
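Follow-ups I plan to try next, in case the bench defaults are the limit (the thread count is a guess on my part):

    rados bench -p scbench 60 write -t 32 --no-cleanup   # default is 16 concurrent ops; try more
    rados bench -p scbench 60 rand                       # random reads, for comparison with seq
    ceph osd pool get scbench size                       # confirm the replication factor being paid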

