Re: Performance doesn't scale well on a full ssd cluster.

At least historically, high CPU usage, and likely context switching and lock contention, have been the limiting factors during high-IOPS workloads on the test hardware at Inktank (and now RH). A while back I ran benchmarks on SSDs with a parametric sweep of Ceph parameters to see whether changing any of our tuning parameters had unexpected benefits, and I didn't see any obviously, universally beneficial changes to our default tunings. It's possible this has changed with the work that Somnath and others have been doing for Giant, though.
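
If you want to check whether your own OSD nodes are hitting the same wall, a rough sketch (assuming the OSD daemons show up as ceph-osd) is to watch per-process CPU and context-switch rates on an OSD host while the benchmark runs:

# per-OSD CPU usage (-u) and context switches (-w), sampled once per second
pidstat -u -w -p $(pgrep -d, ceph-osd) 1
# system-wide run queue length and context-switch rate
vmstat 1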

Mark

On 10/17/2014 05:13 AM, Mark Wu wrote:
The client doesn't hit any bottleneck. I also tried running multiple
clients on different hosts; there was no change.

2014-10-17 14:36 GMT+08:00 Alexandre DERUMIER <aderumier@xxxxxxxxx>:

    Hi,
    >>Thanks for the detailed information, but I am already using fio with the rbd engine. Around 4 volumes are already enough to reach the peak.

    What is the CPU usage of fio-rbd on your side?
    On my side I'm CPU-bound on 8 cores at around 40000 IOPS of 4K reads.
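
    (One quick way to check this on the client host while the benchmark is running, assuming the process is simply named fio, could be:)

    top -b -n 1 -p $(pgrep -d, -x fio)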



    ----- Original Message -----

    From: "Mark Wu" <wudx05@xxxxxxxxx>
    To: "Daniel Schwager" <Daniel.Schwager@xxxxxxxx>
    Cc: ceph-users@xxxxxxxxxxxxxx
    Sent: Thursday, October 16, 2014 19:19:17
    Subject: Re: Performance doesn't scale well on a full ssd cluster.



    Thanks for the detailed information, but I am already using fio with
    the rbd engine. Around 4 volumes are already enough to reach the peak.
    On Oct 17, 2014 at 1:03 AM, wudx05@xxxxxxxxx wrote:



    Thanks for the detailed information, but I am already using fio with
    the rbd engine. Around 4 volumes are already enough to reach the peak.
    On Oct 17, 2014 at 12:55 AM, "Daniel Schwager" <Daniel.Schwager@xxxxxxxx> wrote:




    Hi Mark,

    maybe you want to check rbd-enabled fio:
    http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html

    yum install ceph-devel
    git clone git://git.kernel.dk/fio.git
    cd fio ; ./configure ; make -j5 ; make install
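
    (To verify that the rbd engine actually got compiled in, something like this should list it, assuming a recent fio build that supports --enghelp:)

    fio --enghelp | grep rbd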

    Set the number of jobs (== clients) inside the fio config to
    numjobs=8
    to simulate multiple clients.


    regards
    Danny


    my test.fio:

    [global]
    #logging
    #write_iops_log=write_iops_log
    #write_bw_log=write_bw_log
    #write_lat_log=write_lat_log
    ioengine=rbd        # use librbd directly, no kernel rbd mapping needed
    clientname=admin    # cephx user (client.admin)
    pool=rbd
    rbdname=myimage     # the image must already exist in the pool
    invalidate=0 # mandatory
    rw=randwrite
    bs=1m
    runtime=120
    iodepth=8
    numjobs=8           # number of parallel client jobs

    time_based
    #direct=0


    [seq-write]
    stonewall
    rw=write            # overrides the global rw=randwrite for this job

    #[seq-read]
    #stonewall
    #rw=read
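
    With the job file above, a run might look roughly like this (rbd/myimage and the client.admin keyring are the assumptions taken from the config; create the image first if it doesn't exist):

    rbd create myimage --pool rbd --size 10240
    fio test.fio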









_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




