Hi Irek,
Thanks for the link. I have removed the SSDs for now and performance is now up to 30MB/s in the benchmark. To be honest, I knew the Samsung SSDs weren't great, but I did not expect them to be worse than plain hard disks.
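In case it helps anyone else in the same situation, detaching a writeback cache tier before pulling the SSDs is roughly the sequence below. This is only a sketch: "ssd-cache" is a placeholder for whatever the cache pool is actually called, and "rbd" is the backing pool used in the benchmark.

  # stop new writes from landing in the cache pool
  ceph osd tier cache-mode ssd-cache forward
  # flush and evict every object still held in the cache pool
  rados -p ssd-cache cache-flush-evict-all
  # detach the overlay and remove the tier from the backing pool
  ceph osd tier remove-overlay rbd
  ceph osd tier remove rbd ssd-cache

Once the overlay is removed, client I/O goes straight to the HDD OSDs.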
Pieter
On Aug 12, 2015, at 01:09 PM, Irek Fasikhov <malmyzh@xxxxxxxxx> wrote:
Best regards,
Fasikhov Irek Nurgayazovich
Mob.: +79229045757

2015-08-12 14:52 GMT+03:00 Pieter Koorts <pieter.koorts@xxxxxx>:

Hi

Something that's been bugging me for a while: I am trying to diagnose iowait time within KVM guests. Guests doing reads or writes tend to sit at about 50% to 90% iowait, but the host itself is only doing about 1% to 2% iowait. The result is that the guests are extremely slow.

I currently run three hosts, each with a single SSD OSD and a single HDD OSD in cache-tier writeback mode. Although the SSD (Samsung 850 EVO 120GB) is not a great one, it should at least perform reasonably compared to a hard disk, and in direct SSD tests I get approximately 100MB/s write and 200MB/s read on each SSD.

When I run rados bench, though, the benchmark starts at a not great but okay speed and, as it progresses, just gets slower and slower until it is worse than a USB hard drive. The SSD cache pool is 120GB in size (360GB raw) with about 90GB in use. I have tried tuning the XFS mount options as well, but it has had little effect.

Understandably the server spec is not great, but I don't expect performance to be that bad.

OSD config:

[osd]
osd crush update on start = false
osd mount options xfs = "rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"

Server spec:

Dual quad-core Xeon E5410 and 32GB RAM in each server
10GbE at 10G speed with 8000-byte jumbo frames

Rados bench result (starts at a 50MB/s average and plummets to 11MB/s):

sudo rados bench -p rbd 50 write --no-cleanup -t 1
Maintaining 1 concurrent writes of 4194304 bytes for up to 50 seconds or 0 objects
Object prefix: benchmark_data_osc-mgmt-1_10007
 sec Cur ops started finished avg MB/s cur MB/s  last lat   avg lat
   0       0       0        0        0        0         -         0
   1       1      14       13  51.9906       52 0.0671911  0.074661
   2       1      27       26  51.9908       52 0.0631836 0.0751152
   3       1      37       36  47.9921       40 0.0691167 0.0802425
   4       1      51       50  49.9922       56 0.0816432 0.0795869
   5       1      56       55  43.9934       20  0.208393  0.088523
   6       1      61       60   39.994       20  0.241164 0.0999179
   7       1      64       63  35.9934       12  0.239001  0.106577
   8       1      66       65  32.4942        8  0.214354  0.122767
   9       1      72       71    31.55       24  0.132588  0.125438
  10       1      77       76  30.3948       20  0.256474  0.128548
  11       1      79       78  28.3589        8  0.183564  0.138354
  12       1      82       81  26.9956       12  0.345809  0.145523
  13       1      85       84   25.842       12  0.373247  0.151291
  14       1      86       85  24.2819        4  0.950586  0.160694
  15       1      86       85  22.6632        0         -  0.160694
  16       1      90       89  22.2466        8  0.204714  0.178352
  17       1      94       93  21.8791       16  0.282236  0.180571
  18       1      98       97  21.5524       16  0.262566  0.183742
  19       1     101      100  21.0495       12  0.357659  0.187477
  20       1     104      103   20.597       12  0.369327  0.192479
  21       1     105      104  19.8066        4  0.373233  0.194217
  22       1     105      104  18.9064        0         -  0.194217
  23       1     106      105  18.2582        2   2.35078  0.214756
  24       1     107      106  17.6642        4  0.680246  0.219147
  25       1     109      108  17.2776        8  0.677688  0.229222
  26       1     113      112  17.2283       16   0.29171  0.230487
  27       1     117      116  17.1828       16  0.255915  0.231101
  28       1     120      119  16.9976       12  0.412411  0.235122
  29       1     120      119  16.4115        0         -  0.235122
  30       1     120      119  15.8645        0         -  0.235122
  31       1     120      119  15.3527        0         -  0.235122
  32       1     122      121  15.1229        2  0.319309  0.262822
  33       1     124      123  14.9071        8  0.344094  0.266201
  34       1     127      126  14.8215       12   0.33534  0.267913
  35       1     129      128  14.6266        8  0.355403  0.269241
  36       1     132      131  14.5536       12  0.581528  0.274327
  37       1     132      131  14.1603        0         -  0.274327
  38       1     133      132  13.8929        2   1.43621   0.28313
  39       1     134      133  13.6392        4  0.894817  0.287729
  40       1     134      133  13.2982        0         -  0.287729
  41       1     135      134  13.0714        2   1.87878  0.299602
  42       1     138      137  13.0459       12  0.309637  0.304601
  43       1     140      139  12.9285        8  0.302935  0.304491
  44       1     141      140  12.7256        4    1.5538  0.313415
  45       1     142      141  12.5317        4  0.352417  0.313691
  46       1     145      144  12.5201       12  0.322063  0.317458
  47       1     145      144  12.2537        0         -  0.317458
  48       1     145      144  11.9984        0         -  0.317458
  49       1     145      144  11.7536        0         -  0.317458
  50       1     146      145  11.5985        1   3.79816  0.341463
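A note on the SSDs discussed above: one quick way to see whether a consumer drive is the bottleneck is to benchmark synchronous direct writes the way the OSD journal issues them (O_DIRECT plus O_DSYNC), since some consumer SSDs drop to only a few MB/s under that pattern even though plain buffered tests look fine. A rough fio run, assuming the SSD is /dev/sdX (this overwrites data on the target device, so only run it against a disk or partition you can wipe):

  fio --name=journal-test --filename=/dev/sdX \
      --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --group_reporting

If that number comes out far below the ~100MB/s buffered figure quoted above, the SSDs themselves are the likely culprit.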
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com