Re: Performance in Proof-of-Concept cluster

Hi Hans,

Good suggestion; I also realized image01 was only 1G, so I created a new RBD image "image100" of 100GB. The pool size (replication) is 3.
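(For reference, an image like that can be created with something along these lines; this is a sketch, not necessarily the exact invocation used:)

# rbd create bench/image100 --size 100G

Here are the new results: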

# rbd bench --io-type write --io-pattern rand image100 --pool=bench --io-size=4096 --io-total=1G --io-threads=16
bench  type write io_size 4096 io_threads 16 bytes 1073741824 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1      2544   2559.98    10 MiB/s
    2      4208   2116.21   8.3 MiB/s
    3      5920   1981.29   7.7 MiB/s
    4      7712   1931.98   7.5 MiB/s
    5      9648   1932.78   7.5 MiB/s
    6     11680   1828.65   7.1 MiB/s
    7     13744   1907.18   7.4 MiB/s
    8     15792   1974.38   7.7 MiB/s
    9     18240   2107.27   8.2 MiB/s
   10     21200   2312.23   9.0 MiB/s
   11     24352   2534.38   9.9 MiB/s
   12     27984   2847.97    11 MiB/s
   13     32224   3286.37    13 MiB/s
   14     37344   3820.76    15 MiB/s
   15     42864    4329.3    17 MiB/s
   16     48672   4863.95    19 MiB/s
   17     55344   5471.95    21 MiB/s
   18     64352   6425.54    25 MiB/s
   19     78624   8255.92    32 MiB/s
   20    100176   11471.5    45 MiB/s
   21    127776   15820.7    62 MiB/s
   22    158720     20675    81 MiB/s
   23    190528     25235    99 MiB/s
   24    220592   28393.3   111 MiB/s
   25    248528   29670.1   116 MiB/s
elapsed: 25   ops: 262144   ops/sec: 10254.3   bytes/sec: 40 MiB/s

# rbd bench --io-type write --io-pattern rand image100 --pool=bench --io-size=4096 --io-total=2G --io-threads=128
bench  type write io_size 4096 io_threads 128 bytes 2147483648 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     28032   28159.7   110 MiB/s
    2     52864   26548.9   104 MiB/s
    3     78336   26154.4   102 MiB/s
    4    103040   25817.6   101 MiB/s
    5    128640     25774   101 MiB/s
    6    153216   25056.6    98 MiB/s
    7    178944   25215.8    98 MiB/s
    8    200960   24407.4    95 MiB/s
    9    218752   23086.8    90 MiB/s
   10    237952   21827.3    85 MiB/s
   11    258560   20901.4    82 MiB/s
   12    277760   19637.3    77 MiB/s
   13    295424     18999    74 MiB/s
   14    312064   18707.1    73 MiB/s
   15    328192   18076.8    71 MiB/s
   16    348800   18193.4    71 MiB/s
   17    374144   19400.8    76 MiB/s
   18    398080   20514.6    80 MiB/s
   19    422400     22067    86 MiB/s
   20    447360   23833.4    93 MiB/s
   21    471808   24601.4    96 MiB/s
   22    494592   24070.1    94 MiB/s
   23    518400   24063.8    94 MiB/s
elapsed: 23   ops: 524288   ops/sec: 22424.4   bytes/sec: 88 MiB/s

# rbd bench --io-type write --io-pattern rand image100 --pool=bench --io-size=40960 --io-total=10G --io-threads=16
bench  type write io_size 40960 io_threads 16 bytes 10737418240 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     20688     20787   812 MiB/s
    2     41488   20793.4   812 MiB/s
    3     61952   20683.4   808 MiB/s
    4     82480   20644.5   806 MiB/s
    5    103056   20630.7   806 MiB/s
    6    122032   20268.6   792 MiB/s
    7    142480   20198.2   789 MiB/s
    8    163360   20281.4   792 MiB/s
    9    182944   20092.6   785 MiB/s
   10    203120   20012.6   782 MiB/s
   11    223280   20249.4   791 MiB/s
   12    242352   19974.2   780 MiB/s
   13    262048   19737.4   771 MiB/s
elapsed: 13   ops: 262144   ops/sec: 20065.9   bytes/sec: 784 MiB/s

# rbd bench --io-type write --io-pattern rand image100 --pool=bench --io-size=40960 --io-total=10G --io-threads=128
bench  type write io_size 40960 io_threads 128 bytes 10737418240 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     21120   21333.1   833 MiB/s
    2     41728   20927.8   817 MiB/s
    3     62336   20848.9   814 MiB/s
    4     82816   20756.6   811 MiB/s
    5    103040   20649.9   807 MiB/s
    6    123008   20377.4   796 MiB/s
    7    144000   20470.6   800 MiB/s
    8    164224   20377.4   796 MiB/s
    9    183040   20028.6   782 MiB/s
   10    203136     20019   782 MiB/s
   11    223872   20156.5   787 MiB/s
   12    244480   20095.8   785 MiB/s
elapsed: 12   ops: 262144   ops/sec: 20314.8   bytes/sec: 794 MiB/s

Thanks


On 7/7/22 at 9:34, Hans van den Bogert wrote:
Hi,

Run a close-to-the-metal benchmark on the disks first, just to see the theoretical ceiling.
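(As an illustration, a fio run along these lines against a raw device would give that 4k random-write ceiling; /dev/sdX is a placeholder, and note that writing directly to the device destroys data on it:)

# fio --name=raw-randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting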

Also, rerun your benchmarks with random writes, to get more honest numbers.

Based on the numbers so far, you seem to be getting 40k client IOPS @ 512 threads. With 3x replication and 3 nodes, that translates roughly 1:1 to 40k backend write IOPS per node, i.e. ~10k per SSD. Depending on the direct-on-disk benchmark (requested above), that can be either good or bad.
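(Back-of-the-envelope, assuming the 4 SSDs per node listed below:

40k client IOPS x 3 replicas = 120k backend write IOPS
120k / 3 nodes               = 40k per node
40k / 4 SSDs                 = ~10k per SSD)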

You might want to try 2 ceph-osd processes per SSD, just to see if the Ceph process is the bottleneck.
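(As an illustration only, and it would mean redeploying the OSDs: ceph-volume can split a device into two OSDs like this, with /dev/sdX as a placeholder:)

# ceph-volume lvm batch --osds-per-device 2 /dev/sdX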

Hope this gives you food for thought.

On 7/6/22 13:13, Eneko Lacunza wrote:
Hi all,

We have a proof-of-concept HCI cluster with Proxmox v7 and Ceph v15.

We have 3 nodes, each with:

2x Intel Xeon Gold 5218 (16 cores/32 threads per socket)
Dell PERC H330 controller (SAS3)
4x Samsung PM1634 3.84TB SAS 12Gb SSDs
Network is LACP 2x10Gbps

This cluster is used for some VDI tests, with Windows 10 VMs.

The pool has size=3/min_size=2 and is used for RBD (KVM/QEMU VMs).
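(These settings can be double-checked with something like the following; the pool name is a placeholder:)

# ceph osd pool get <pool> size
# ceph osd pool get <pool> min_size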

We are seeing Ceph performance of about 600 MiB/s read and 500 MiB/s write, with about 6,000 read IOPS and about 2,000 write IOPS. Reads and writes are simultaneous (mixed I/O), as reported by Ceph.

Is this reasonable performance for the hardware we have? We see about 25-30% CPU used on the nodes, and ceph-osd processes spiking between 600% and 1000% (I guess that means 6-10 threads fully busy).
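(Per-thread CPU use of the OSDs could be confirmed with something like the following, assuming sysstat is installed:)

# pidstat -t -p $(pgrep -d, ceph-osd) 1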

I have checked the cache setting for the disks, but they report cache as "Not applicable".
The BIOS power profile is set to performance and C-states are disabled.
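(For what it's worth, the write cache setting on SAS drives can also be queried directly, e.g. with sdparm; the device path is a placeholder:)

# sdparm --get=WCE /dev/sdX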

Thanks


Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
