Re: Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release


 



From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of kevin parrikar
Sent: 07 January 2017 13:11
To: Lionel Bouton <lionel-subscription@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

 

Thanks for your valuable input.
We were using these SSDs in our NAS box (Synology), where they were giving 13k IOPS for our file server in RAID1. We had a few spare disks, which we added to our Ceph nodes hoping they would give performance as good as the NAS box. (I am not comparing the NAS with Ceph, just explaining why we decided to use these SSDs.)

We don't have an S3520 or S3610 at the moment, but we can order one of these to see how it performs in Ceph. We have 4x S3500 80GB handy.
If I create a 2-node cluster with 2x S3500 each and a replica count of 2, do you think it can deliver 24MB/s of 4k writes?
We bought the S3500 because last time we tried Ceph, people were suggesting this model :) :)

Thanks a lot for your help.

 

What is your application/use case for Ceph? 24MB/s of 4k writes will be quite a hard number to hit at lower queue depths. I saw the benchmark in your other post showing you were testing at a queue depth of 32; is this representative of your real-life workload?
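For reference, 24MB/s of 4k writes is roughly 6,000 client IOPS (24 MB/s ÷ 4 KiB), and with 2 replicas plus the Filestore journal double-write, each client write turns into about four backend writes, so your four SSDs together would need to sustain somewhere around 24,000 small sync writes per second. If you want to see how much queue depth matters, a quick fio sketch along these lines would do it (the filename here is just a placeholder for a test file on an RBD-backed filesystem; adjust size and runtime to taste):

    fio --name=qd-test --filename=/path/to/testfile --size=1G \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=1 --runtime=60 --time_based --group_reporting

Run it once with --iodepth=1 and once with --iodepth=32; the gap between the two results shows how dependent that 24MB/s figure is on deep queues.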

 



 

On Sat, Jan 7, 2017 at 6:01 PM, Lionel Bouton <lionel-subscription@xxxxxxxxxxx> wrote:

Hi,

On 07/01/2017 at 04:48, kevin parrikar wrote:

I really need some help here :(

I replaced all the 7.2k RPM SAS disks with new Samsung 840 EVO 512GB SSDs, with no separate journal disk. Both OSD nodes now have 2 SSDs each, with a replica count of 2.
The total number of OSD processes in the cluster is 4, all on SSDs.


These SSDs are not designed for the kind of usage you are putting them through. The EVO and even the Pro line from Samsung can't write both fast and securely (i.e., you can write fast and lose data on a power outage, or you can write slowly and keep your data). Ceph always makes sure your data is recoverable before completing a write, so it is slow with these SSDs.
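If you want to see this effect in isolation, the usual check is a single-threaded O_SYNC write test directly against the SSD, which roughly mimics Ceph's journal write pattern. A minimal sketch (/dev/sdX is a placeholder for the device under test; warning, this overwrites data on it):

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting

A datacenter SSD with power-loss protection will typically sustain thousands of these sync writes per second; consumer drives like the 840 EVO often fall to a few hundred, because they cannot safely acknowledge writes still sitting in their volatile cache.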

Christian already warned you about endurance and reliability; you have just discovered the third problem: speed.

Lionel

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
