Re: Benchmark does not show gains with DB on SSD

Eugene:
Between tests we destroyed the OSDs and created them from scratch. We used a Docker image to deploy Ceph on a single machine.
I've seen that WAL/DB partitions are created on the disks.
Should I also check somewhere in the Ceph config that it actually uses them?
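(I assume something like the following would show it, provided the admin socket is reachable inside the container -- osd.0 is just an example id:

  ceph osd metadata 0 | grep -E 'bluefs|bluestore_bdev'
  ceph daemon osd.0 perf dump | grep -E 'db_used_bytes|slow_used_bytes'

The first should list the WAL/DB partition paths; a non-zero slow_used_bytes in the bluefs counters of the second would mean the DB has spilled over onto the HDD.)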

David:
We used 4MB writes.

I know about the recommended journal (DB) size; however, this is the machine we have at the moment.
For final production I can change the size of the SSDs (if it makes sense).
The benchmark didn't fill the 30GB DB in the time it was running, so I doubt that a properly sized DB would change the results.
(It wrote 38GB per minute of testing; with the 50% EC overhead that is ~57GB/min of raw writes spread across 12 disks, therefore about 5GB per minute per disk.)

Jan

On Wed, 12 Sep 2018 at 17:36, David Turner <drakonstein@xxxxxxxxx> wrote:
If your writes are small enough (64k or smaller), they're being placed on the WAL device regardless of where your DB is.  If you change your testing to use larger writes, you should see a difference from moving the DB to the SSD.
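(If I remember right, that cutoff is governed by the bluestore_prefer_deferred_size_hdd option; you can check what your OSDs are actually using with, e.g.:

  ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd

where osd.0 is an example id.)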

Please note that the community has never recommended using less than a 120GB DB for a 12TB OSD, and the docs now officially say that you should use at least a 480GB DB for a 12TB OSD.  If you're setting up your OSDs with a 30GB DB, you're just going to fill it up really quickly, spill over onto the HDD, and have wasted your money on the SSDs.
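(For reference, the 480GB figure follows from the docs' rule of thumb that block.db should be at least 4% of the data device: 0.04 * 12TB = 480GB.)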

On Wed, Sep 12, 2018 at 11:07 AM Ján Senko <jan.senko@xxxxxxxxx> wrote:
We are benchmarking a test machine which has:
8 cores, 64GB RAM
12 * 12 TB HDD (SATA)
2 * 480 GB SSD (SATA)
1 * 240 GB SSD (NVMe)
Ceph Mimic

Baseline benchmark for HDD only (Erasure Code 4+2)
Write 420 MB/s, 100 IOPS, 150ms latency
Read 1040 MB/s, 260 IOPS, 60ms latency

Now we moved the WAL to the SSD (all 12 WALs on a single SSD, default size of 512MB):
Write 640 MB/s, 160 IOPS, 100ms latency
Reads were identical to the baseline.

A nice boost, we thought, so we moved WAL+DB to the SSD (assigning 30GB for the DB).
All results are the same as above!

Q: This is suspicious, right? Why is the DB on the SSD not helping with our benchmark? We use rados bench.
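(The runs were along these lines; pool name and duration are illustrative:

  rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
  rados bench -p testpool 60 seq -t 16

-b 4194304 makes the 4MB object size explicit, and --no-cleanup keeps the objects around for the seq read pass.)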

We tried putting the WAL on the NVMe and, again, the results are the same as on the SSD.
Same for WAL+DB on the NVMe.

Again, the same speed. Any ideas why we don't gain speed by using faster HW here?
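(In case the layout matters: a typical way to place the WAL/DB with ceph-volume looks roughly like this, with one such OSD per HDD -- device names are only examples:

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
)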

Jan


--
Jan Senko, Skype janos-
Phone in Switzerland: +41 774 144 602
Phone in Czech Republic: +420 777 843 818