Re: Ceph RBD, MySQL write IOPs - what is possible?

On Fri, Jun 7, 2024 at 1:22 PM Mark Lehrer <lehrer@xxxxxxxxx> wrote:
>
> > server RAM and CPU
> > * osd_memory_target
> > * OSD drive model
>
> Thanks for the reply.  The servers have dual Xeon Gold 6154 CPUs with
> 384 GB.  The drives are older, first gen NVMe - WDC SN620.
> osd_memory_target is at the default.  Mellanox CX5 and SN2700
> hardware.  The test client is a similar machine with no drives.
>
> The CPUs are 80% idle during the test.  The OSDs (according to iostat)
> hover around 50% util during the test and are close to 0 at other
> times.
>
...
> > > I get about 2000 IOPs with this test:
> > >
> > > # rados bench -p volumes 10 write -t 8 -b 16K
> > > hints = 1
> > > Maintaining 8 concurrent writes of 16384 bytes to objects of size
> > > 16384 for up to 10 seconds or 0 objects
> > > Object prefix: benchmark_data_fstosinfra-5_3652583
> > >  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
> > >    0       0         0         0         0         0           -           0
> > >    1       8      2050      2042   31.9004   31.9062  0.00247633  0.00390848
> > >    2       8      4306      4298   33.5728     35.25  0.00278488  0.00371784
> > >    3       8      6607      6599   34.3645   35.9531  0.00277546  0.00363139
> > >    4       7      8951      8944   34.9323   36.6406  0.00414908  0.00357249
> > >    5       8     11292     11284    35.257   36.5625  0.00291434  0.00353997
> > >    6       8     13588     13580   35.3588    35.875  0.00306094  0.00353084
> > >    7       7     15933     15926   35.5432   36.6562  0.00308388   0.0035123
> > >    8       8     18361     18353   35.8399   37.9219  0.00314996  0.00348327
> > >    9       8     20629     20621   35.7947   35.4375  0.00352998   0.0034877
> > >   10       5     23010     23005   35.9397     37.25  0.00395566  0.00347376
> > > Total time run:         10.003
> > > Total writes made:      23010
> > > Write size:             16384
> > > Object size:            16384
> > > Bandwidth (MB/sec):     35.9423
> > > Stddev Bandwidth:       1.63433
> > > Max bandwidth (MB/sec): 37.9219
> > > Min bandwidth (MB/sec): 31.9062
> > > Average IOPS:           2300
> > > Stddev IOPS:            104.597
> > > Max IOPS:               2427
> > > Min IOPS:               2042
> > > Average Latency(s):     0.0034737
> > > Stddev Latency(s):      0.00163661
> > > Max latency(s):         0.115932
> > > Min latency(s):         0.00179735
> > > Cleaning up (deleting benchmark objects)
> > > Removed 23010 objects
> > > Clean up completed and total clean up time :7.44664

Not the most helpful response, but on an (admittedly well-tuned)
cluster of 3x Intel Atom (C3758) nodes with 10GbE networking and 2x
S4510 (SATA) SSDs per node, I get this from the same rados bench run:
Total time run:         10.0015
Total writes made:      35931
Write size:             16384
Object size:            16384
Bandwidth (MB/sec):     56.1335
Stddev Bandwidth:       0.883058
Max bandwidth (MB/sec): 58.0625
Min bandwidth (MB/sec): 55.0469
Average IOPS:           3592
Stddev IOPS:            56.5157
Max IOPS:               3716
Min IOPS:               3523
Average Latency(s):     0.00222274
Stddev Latency(s):      0.000894184
Max latency(s):         0.016538
Min latency(s):         0.00117819
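
One other quick check that can help isolate this: benchmark a single
OSD directly, which takes the network and replication out of the
picture. A rough sketch, assuming osd.0 sits on one of your SN620s
(substitute a real OSD id; the arguments are total bytes to write,
then block size):

# ceph tell osd.0 bench 1073741824 16384

I'd expect even a first-gen NVMe drive to sustain well over 2000 16K
write IOPS on that test; if it doesn't, look below the cluster layer.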

Given that you have Xeon-class processors, it would seem something is
very, very wrong with your configuration. Anthony is asking a lot of
the right questions below, and I would recommend following up on all
of them.
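
If you want a number closer to what MySQL will actually see, fio's
rbd engine is a better proxy than rados bench, since it exercises the
librbd path. A minimal sketch, assuming a scratch image named
fio-test in your volumes pool (the image name and client name here
are placeholders):

# rbd create volumes/fio-test --size 10G
# fio --name=innodb-sim --ioengine=rbd --clientname=admin \
      --pool=volumes --rbdname=fio-test --rw=randwrite --bs=16k \
      --iodepth=8 --direct=1 --time_based --runtime=60

Comparing that against your rados bench numbers should show whether
the gap is in the librbd/client path or in the OSDs themselves.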

Cheers,
Tyler
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx