Re: NVMe's

Thanks for the info!  Interesting numbers.  Probably not 60K client IOPS per OSD then, but the tp_osd_tp threads were likely working pretty hard under the combined client/recovery workload.


Mark


On 9/24/20 2:49 PM, Martin Verges wrote:
Hello,

It was some time ago, but as far as I remember and as found in the chat log, it was during backfill/recovery with high client workload, on an Intel Xeon Silver 4110 (2.10 GHz, 8C/16T) CPU. I found a screenshot in my chat history showing 775% and 722% CPU usage in htop for 2 OSDs (the server has 2 PCIe PM1725a NVMe OSDs and 12 HDD OSDs). Unfortunately I have no console log output that would show more details such as the IO pattern.
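
For anyone who wants to reproduce that kind of observation, here is a minimal illustrative sketch (not from the original thread): it samples per-OSD CPU usage roughly the way htop reports it, assuming the psutil Python package is available, the ceph-osd processes run locally, and a 5-second sampling window is acceptable.

import time
import psutil

# Find the local ceph-osd processes.
osds = [p for p in psutil.process_iter(['name']) if p.info['name'] == 'ceph-osd']

# Prime cpu_percent() so the next call averages over the sleep interval.
for p in osds:
    p.cpu_percent(None)

time.sleep(5)

for p in osds:
    # As in htop, a single process can exceed 100% on a multi-core box;
    # ~775% would mean roughly 7-8 cores kept busy by that OSD.
    print(f"ceph-osd pid {p.pid}: {p.cpu_percent(None):.0f}% CPU")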

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Thu, Sep 24, 2020 at 9:01 PM Mark Nelson <mnelson@xxxxxxxxxx> wrote:

    Mind if I ask what size of IOs those were, what kind of IOs
    (reads/writes/sequential/random?), and what kind of cores?


    Mark


    On 9/24/20 1:43 PM, Martin Verges wrote:
    > I did not see 10 cores, but 7 cores per OSD over a long period on
    > PM1725a disks, with around 60k IO/s according to sysstat for each
    > disk.
    >
    > --
    > Martin Verges
    > Managing director
    >
    > Mobile: +49 174 9335695
    > E-Mail: martin.verges@xxxxxxxx
    > Chat: https://t.me/MartinVerges
    >
    > croit GmbH, Freseniusstr. 31h, 81247 Munich
    > CEO: Martin Verges - VAT-ID: DE310638492
    > Com. register: Amtsgericht Munich HRB 231263
    >
    > Web: https://croit.io
    > YouTube: https://goo.gl/PGE1Bx
    >
    >
    > On Thu, Sep 24, 2020 at 6:47 PM <vitalif@xxxxxxxxxx> wrote:
    >
    >     OK, I'll retry my tests several times more.
    >
    >     But I've never seen an OSD utilize 10 cores, so... I won't
    >     believe it until I see it myself on my machine. :-))
    >
    >     I tried a fresh OSD on a block ramdisk ("brd"), for example. It
    >     was eating 658% CPU and pushing only 4138 write iops...
    >     _______________________________________________
    >     ceph-users mailing list -- ceph-users@xxxxxxx
    >     To unsubscribe send an email to ceph-users-leave@xxxxxxx
    >

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



