Re: RBD performance slowly degrades :-(

Hi Irek,

Thanks for the link. I have removed the SSDs for now and benchmark performance is up to 30MB/s. To be honest, I knew the Samsung SSDs weren't great, but I did not expect them to be worse than plain hard disks.
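
For anyone doing the same, taking a writeback tier out of the data path goes roughly like this (pool names here are examples; adjust to your own setup):

# stop new writes landing in the cache, then drain it
ceph osd tier cache-mode ssd-cache forward
rados -p ssd-cache cache-flush-evict-all
# detach the tier from the backing pool and remove it
ceph osd tier remove-overlay rbd
ceph osd tier remove rbd ssd-cache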

Pieter

On Aug 12, 2015, at 01:09 PM, Irek Fasikhov <malmyzh@xxxxxxxxx> wrote:

Hi.
Read this thread here:
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg17360.html
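
The short version: the OSD journal writes with O_DIRECT and O_DSYNC, and consumer SSDs like the 850 EVO handle that pattern very badly even when plain direct I/O looks fine. A quick way to check a drive (the path is an example; pointing this at a raw device is destructive):

# sustained 4k sync writes, the same pattern the journal issues
dd if=/dev/zero of=/mnt/ssd/journal-test bs=4k count=10000 oflag=direct,dsync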

Best regards, Irek Fasikhov
Mobile: +79229045757

2015-08-12 14:52 GMT+03:00 Pieter Koorts <pieter.koorts@xxxxxx>:
Hi

Something that's been bugging me for a while: I am trying to diagnose iowait time within KVM guests. Guests doing reads or writes tend to sit at about 50% to 90% iowait, but the host itself only shows about 1% to 2% iowait. The result is that the guests are extremely slow.
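
A simple way to compare the two sides is to watch per-device latency on the host and inside a guest at the same time (assuming sysstat is installed):

# await and %util per device; compare the guest's vdX against the host's sdX
iostat -x 1
# overall CPU iowait is the "wa" column
vmstat 1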

I currently run 3 hosts, each with a single SSD OSD and a single HDD OSD in cache-tier writeback mode. Although the SSD (Samsung 850 EVO 120GB) is not a great one, it should at least perform reasonably compared to a hard disk, and in direct SSD tests I get approximately 100MB/s write and 200MB/s read on each SSD.
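
For reference, the direct tests were along these lines (the device name is an example, and writing to a raw device destroys its contents):

# sequential direct write, then direct read
dd if=/dev/zero of=/dev/sdb bs=4M count=256 oflag=direct
dd if=/dev/sdb of=/dev/null bs=4M count=256 iflag=direct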

When I run rados bench, though, it starts at a not-great-but-okay speed and gets slower and slower as the benchmark progresses, until it's worse than a USB hard drive. The SSD cache pool is 120GB in size (360GB raw) with about 90GB in use. I have tried tuning the XFS mount options as well, but that has had little effect.
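
For context, when a tier fills like this, flushing and eviction kick in according to pool settings along these lines (the pool name and values are illustrative, not my exact ones):

# size the tier and start flushing dirty objects well before it fills
ceph osd pool set ssd-cache target_max_bytes 96000000000
ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
ceph osd pool set ssd-cache cache_target_full_ratio 0.8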

Understandably, the server spec is not great, but I don't expect performance to be this bad.

OSD config:
[osd]
osd crush update on start = false
osd mount options xfs = "rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"

Server spec:
Dual quad-core Xeon E5410 and 32GB RAM in each server
10GbE networking with 8000-byte jumbo frames

Rados bench result (starts at a 50MB/s average and plummets to 11MB/s):
sudo rados bench -p rbd 50 write --no-cleanup -t 1
 Maintaining 1 concurrent writes of 4194304 bytes for up to 50 seconds or 0 objects
 Object prefix: benchmark_data_osc-mgmt-1_10007
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1       1        14        13   51.9906        52 0.0671911  0.074661
     2       1        27        26   51.9908        52 0.0631836 0.0751152
     3       1        37        36   47.9921        40 0.0691167 0.0802425
     4       1        51        50   49.9922        56 0.0816432 0.0795869
     5       1        56        55   43.9934        20  0.208393  0.088523
     6       1        61        60    39.994        20  0.241164 0.0999179
     7       1        64        63   35.9934        12  0.239001  0.106577
     8       1        66        65   32.4942         8  0.214354  0.122767
     9       1        72        71     31.55        24  0.132588  0.125438
    10       1        77        76   30.3948        20  0.256474  0.128548
    11       1        79        78   28.3589         8  0.183564  0.138354
    12       1        82        81   26.9956        12  0.345809  0.145523
    13       1        85        84    25.842        12  0.373247  0.151291
    14       1        86        85   24.2819         4  0.950586  0.160694
    15       1        86        85   22.6632         0         -  0.160694
    16       1        90        89   22.2466         8  0.204714  0.178352
    17       1        94        93   21.8791        16  0.282236  0.180571
    18       1        98        97   21.5524        16  0.262566  0.183742
    19       1       101       100   21.0495        12  0.357659  0.187477
    20       1       104       103    20.597        12  0.369327  0.192479
    21       1       105       104   19.8066         4  0.373233  0.194217
    22       1       105       104   18.9064         0         -  0.194217
    23       1       106       105   18.2582         2   2.35078  0.214756
    24       1       107       106   17.6642         4  0.680246  0.219147
    25       1       109       108   17.2776         8  0.677688  0.229222
    26       1       113       112   17.2283        16   0.29171  0.230487
    27       1       117       116   17.1828        16  0.255915  0.231101
    28       1       120       119   16.9976        12  0.412411  0.235122
    29       1       120       119   16.4115         0         -  0.235122
    30       1       120       119   15.8645         0         -  0.235122
    31       1       120       119   15.3527         0         -  0.235122
    32       1       122       121   15.1229         2  0.319309  0.262822
    33       1       124       123   14.9071         8  0.344094  0.266201
    34       1       127       126   14.8215        12   0.33534  0.267913
    35       1       129       128   14.6266         8  0.355403  0.269241
    36       1       132       131   14.5536        12  0.581528  0.274327
    37       1       132       131   14.1603         0         -  0.274327
    38       1       133       132   13.8929         2   1.43621   0.28313
    39       1       134       133   13.6392         4  0.894817  0.287729
    40       1       134       133   13.2982         0         -  0.287729
    41       1       135       134   13.0714         2   1.87878  0.299602
    42       1       138       137   13.0459        12  0.309637  0.304601
    43       1       140       139   12.9285         8  0.302935  0.304491
    44       1       141       140   12.7256         4    1.5538  0.313415
    45       1       142       141   12.5317         4  0.352417  0.313691
    46       1       145       144   12.5201        12  0.322063  0.317458
    47       1       145       144   12.2537         0         -  0.317458
    48       1       145       144   11.9984         0         -  0.317458
    49       1       145       144   11.7536         0         -  0.317458
    50       1       146       145   11.5985         1   3.79816  0.341463


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


