Re: Expected IO in luminous Ceph Cluster


 



Hi Sinan,

That would be great. The numbers should differ a lot, since you have an all-flash pool, but it would be interesting to see what we could expect from such a configuration.

Regards
Felix

-------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Registered office: Juelich
Registered in the commercial register of the district court of Dueren, No. HR B 3498
Chairman of the Supervisory Board: MinDir Dr. Karl Eugen Huthmacher
Management Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman),
Karsten Beneke (Deputy Chairman), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------
 

On 07.06.19 at 12:02, "Sinan Polat" <sinan@xxxxxxxx> wrote:

    Hi Felix,
    
    I can run your commands inside an OpenStack VM. The storage cluster consists of 12 OSD servers, each holding 8x 960GB SSDs. Luminous FileStore. Replication size 3.
    
    Would it help you if I ran your commands on my cluster?
    
    Sinan
    
    > On 7 Jun 2019, at 08:52, Stolte, Felix <f.stolte@xxxxxxxxxxxxx> wrote:
    > 
    > I have no performance data from before we migrated to Bluestore. You should start a separate topic for your question.
    > 
    > Could anyone with a more or less equally sized cluster post the output of sysbench with the following parameters (either from inside an OpenStack VM or on a mounted RBD)?
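    > 
    > If you test on a mounted RBD, the image could be prepared roughly like this (just a sketch; pool name, image name, size and filesystem are only examples):
    > 
    > rbd create --size 10G rbd/benchtest
    > rbd map rbd/benchtest                        # e.g. /dev/rbd0
    > mkfs.xfs /dev/rbd0
    > mkdir -p /mnt/benchtest && mount /dev/rbd0 /mnt/benchtest
    > cd /mnt/benchtest                            # then run sysbench from here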
    > 
    > sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G 
    >    --file-test-mode=rndrw --file-rw-ratio=2 prepare
    > 
    > sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G 
    >    --file-test-mode=rndrw --file-rw-ratio=2 run
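    > 
    > Afterwards the test files can be removed with the matching cleanup step (same options, cleanup mode; just a sketch):
    > 
    > sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G 
    >    --file-test-mode=rndrw --file-rw-ratio=2 cleanup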
    > 
    > Thanks in advance.
    > 
    > Regards
    > Felix
    > 
    > 
    > 
    > On 06.06.19 at 15:09, "Marc Roos" <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
    > 
    > 
    >    I am also thinking of moving the WAL/DB of the SATA HDDs to SSD. Did 
    >    you run tests before and after this change, and do you know what the 
    >    difference in IOPS is? And is the advantage bigger or smaller when your 
    >    SATA HDDs are slower?
    > 
    > 
    >    -----Original Message-----
    >    From: Stolte, Felix [mailto:f.stolte@xxxxxxxxxxxxx] 
    >    Sent: Thursday, 6 June 2019 10:47
    >    To: ceph-users
    >    Subject:  Expected IO in luminous Ceph Cluster
    > 
    >    Hello folks,
    > 
    >    we are running a Ceph cluster on Luminous consisting of 21 OSD nodes, 
    >    each with 9x 8TB SATA drives and 3 Intel 3700 SSDs for the Bluestore 
    >    WAL and DB (1:3 ratio). The OSD nodes have 10Gb for both the public and 
    >    the cluster network. The cluster has been running stable for over a 
    >    year. We hadn't taken a closer look at IO until one of our customers 
    >    started to complain about a VM we migrated from VMware with NetApp 
    >    storage to our OpenStack cloud with Ceph storage. He sent us a sysbench 
    >    report from the machine, which I could reproduce on other VMs as well 
    >    as on a mounted RBD on physical hardware:
    > 
    >    sysbench --file-fsync-freq=1 --threads=16 fileio --file-total-size=1G 
    >    --file-test-mode=rndrw --file-rw-ratio=2 run
    > 
    >    sysbench 1.0.11 (using system LuaJIT 2.1.0-beta3)
    > 
    >    Running the test with following options:
    >    Number of threads: 16
    >    Initializing random number generator from current time
    > 
    >    Extra file open flags: 0
    >    128 files, 8MiB each
    >    1GiB total file size
    >    Block size 16KiB
    >    Number of IO requests: 0
    >    Read/Write ratio for combined random IO test: 2.00
    >    Periodic FSYNC enabled, calling fsync() each 1 requests.
    >    Calling fsync() at the end of test, Enabled.
    >    Using synchronous I/O mode
    >    Doing random r/w test
    > 
    >    File operations:
    >        reads/s:                      36.36
    >        writes/s:                     18.18
    >        fsyncs/s:                     2318.59
    > 
    >    Throughput:
    >        read, MiB/s:                  0.57
    >        written, MiB/s:               0.28
    > 
    >    General statistics:
    >        total time:                          10.0071s
    >        total number of events:              23755
    > 
    >    Latency (ms):
    >             min:                                  0.01
    >             avg:                                  6.74
    >             max:                               1112.58
    >             95th percentile:                     26.68
    >             sum:                             160022.67
    > 
    >    Threads fairness:
    >        events (avg/stddev):           1484.6875/52.59
    >        execution time (avg/stddev):   10.0014/0.00
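    > 
    >    As a rough cross-check: with fsync() after every request, 23755 events 
    >    in ~10 s across 16 threads work out to about 148 synchronous operations 
    >    per thread per second, i.e. roughly 6.7 ms per request, which matches 
    >    the average latency reported above.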
    > 
    >    Are these numbers reasonable for a cluster of our size?
    > 
    >    Best regards
    >    Felix
    >    IT-Services
    >    Phone: 02461 61-9243
    >    E-Mail: f.stolte@xxxxxxxxxxxxx
    > 
    > 
    > 
    > 
    > 
    > 
    
    

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





