Re: Benchmark does not show gains with DB on SSD

Hi Jan,

How did you move the WAL and DB to the SSD/NVMe? By recreating the OSDs, or with a different approach? Did you check afterwards that the new devices are really being used for that purpose? We had to deal with this a couple of months ago [1], and it is not immediately obvious whether the new devices are actually in use.
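
For example, the OSD metadata shows which devices BlueFS is actually using; the OSD id below is just an example, and ceph-volume only reports LVM-based OSDs:

  # The bluefs_*/bluestore_* fields list the partitions backing the data, DB and WAL:
  ceph osd metadata 0 | grep -E 'bluefs_db_partition_path|bluefs_wal_partition_path|bluestore_bdev_partition_path'

  # On the OSD host, list what ceph-volume knows about each OSD:
  ceph-volume lvm list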

Regards,
Eugen

[1] http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/


Quoting Ján Senko <jan.senko@xxxxxxxxx>:

We are benchmarking a test machine which has:
8 cores, 64GB RAM
12 * 12 TB HDD (SATA)
2 * 480 GB SSD (SATA)
1 * 240 GB SSD (NVMe)
Ceph Mimic

Baseline benchmark for HDD only (Erasure Code 4+2)
Write 420 MB/s, 100 IOPS, 150ms latency
Read 1040 MB/s, 260 IOPS, 60ms latency

Now we moved the WAL to the SSD (all 12 WALs on a single SSD, default size of
512 MB):
Write 640 MB/s, 160 IOPS, 100ms latency
Read results identical to the above.
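
(For context, one common way to get this layout when creating an OSD with
ceph-volume looks roughly like the sketch below; the device paths are only
placeholders, not necessarily what we used.)

  # Sketch only: BlueStore OSD with data on an HDD and the WAL on an SSD partition.
  # /dev/sdb and /dev/sdm1 are placeholder devices.
  ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/sdm1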

A nice boost, we thought, so we moved the WAL+DB to the SSD (assigning 30 GB for
the DB).
All results are the same as above!
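
(If the DB size is driven from ceph.conf at OSD creation time, the relevant
option would be bluestore_block_db_size; the snippet below is only an
illustration of a ~30 GiB value, not necessarily how we configured it.)

  [osd]
  # ~30 GiB for the BlueStore DB, in bytes (only applied when the DB is created)
  bluestore_block_db_size = 32212254720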

Q: This is suspicious, right? Why is the DB on the SSD not helping with our
benchmark? We use *rados bench*.
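
(The runs were of the usual rados bench form, along these lines; the pool name
and runtime below are just examples.)

  # Write test, keeping the objects so they can be read back afterwards:
  rados bench -p testpool 60 write --no-cleanup
  # Sequential read test against the objects written above:
  rados bench -p testpool 60 seq
  # Remove the benchmark objects when done:
  rados -p testpool cleanup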

We tried putting the WAL on the NVMe, and again, the results are the same as on
the SSD.
Same for WAL+DB on the NVMe.

Again, the same speed. Any ideas why we don't gain speed by using faster HW
here?

Jan

--
Jan Senko, Skype janos-
Phone in Switzerland: +41 774 144 602
Phone in Czech Republic: +420 777 843 818



