Re: Benchmark performance when using SSD as the journal


 



Thanks, Mokhtar! This is just what I was looking for; thanks for your explanation!

 

 

Best Regards,

Dave Chen

 

From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
Sent: Wednesday, November 14, 2018 3:36 PM
To: Chen2, Dave; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Benchmark performance when using SSD as the journal

 



Hi Dave,

The SSD journal will help boost IOPS and latency, which will be more apparent at small block sizes. The rados bench default block size is 4 MB; use the -b option to specify the size. Try 4k, 32k, 64k ...
As a side note, this is a rados level test, the rbd image size is not relevant here.
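A quick sketch of that suggestion (editor's addition, not from the thread): the snippet below only builds and prints the rados bench command lines for each small block size, so it can be sanity-checked without a cluster; paste the printed commands into a cluster node to actually run them. The pool name "rbd" and the 60 s duration are assumptions carried over from Dave's original test.

```shell
# Sketch (editor's addition): construct "rados bench" invocations for
# several small block sizes, per the advice above. Pool name "rbd" and
# the 60 s duration are assumed from the original test.
# This only prints the commands; it does not touch a cluster.
bench_cmds() {
    duration=$1
    for bs in 4096 32768 65536; do          # 4k, 32k, 64k
        echo "rados bench -p rbd $duration write -b $bs"
    done
}
bench_cmds 60
```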

Maged.

On 14/11/18 06:21, Dave.Chen@xxxxxxxx wrote:

Hi all,

 

We want to compare the performance of an HDD partition as the journal (inline on the OSD disk) versus an SSD partition as the journal. Here is what we have done: we have 3 nodes used as Ceph OSD hosts, each with 3 OSDs. First, we created the OSDs with the journal on a partition of the OSD disk and ran the “rados bench” utility to measure performance; then we migrated the journal from HDD to SSD (Intel S4500) and ran “rados bench” again. The expected result was that the SSD journal should be much faster than the HDD, but the results show nearly no change.

 

The configuration of Ceph is as below,

pool size: 3

osd count: 3 nodes * 3 OSDs = 9 total

pg (pgp) num: 300

osd nodes are separated across three different nodes

rbd image size: 10G (10240M)

 

The utility I used is,

rados bench -p rbd $duration write

rados bench -p rbd $duration seq

rados bench -p rbd $duration rand
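One caveat worth noting about the sequence above (editor's note, not raised in the thread): the seq and rand passes replay objects left behind by a previous write pass, and rados bench deletes its benchmark objects on exit unless --no-cleanup is given. A hedged sketch of the full sequence, keeping the $duration placeholder from the original commands:

```shell
# Write pass keeps its objects so the read passes have data to replay.
rados bench -p rbd $duration write --no-cleanup
rados bench -p rbd $duration seq
rados bench -p rbd $duration rand
# Remove the leftover benchmark objects afterwards.
rados -p rbd cleanup
```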

 

Is there anything wrong with what I did? Could anyone give me some suggestions?

 

 

Best Regards,

Dave Chen

 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 

