[Single OSD performance on SSD] Can't go over 3.2K IOPS

>>Are the results with journal and data configured on the same SSD?
yes

>>Also, how are you configuring your journal device? Is it a block device?
yes.

# ceph-deploy osd create node:sdb

# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: ATA Crucial_CT1024M5 (scsi)
Disk /dev/sdb: 1024GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name          Flags
 2      1049kB  5369MB  5368MB               ceph journal
 1      5370MB  1024GB  1019GB  xfs          ceph data
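
For reference, I believe the 5368MB journal partition above is just ceph-deploy applying the default "osd journal size" of 5120 (parted reports decimal MB, hence 5368). A minimal ceph.conf sketch if you want to size it explicitly (value assumed, adjust to taste):

[osd]
# journal partition size used by ceph-disk/ceph-deploy;
# 5120 here corresponds to the ~5.4GB (decimal) partition 2 above
osd journal size = 5120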

>>If journal and data are not on the same device, results may change.
Yes, sure, of course.

>>BTW, there are SSDs like the SanDisk Optimus drives that use capacitor-backed DRAM and thus always ignore the CMD_FLUSH command, since the drive guarantees that once data reaches it, it will be power-fail safe. So you don't need the kernel patch.

Oh, good to know! Note that the kernel patch is really useful for these cheap consumer Crucial M550 drives, but I don't see too much difference for the Intel S3500.
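
If anyone wants to measure the flush penalty on their own drive, the usual check is a single-threaded O_SYNC 4k sequential write with fio directly against the journal partition (a sketch only; it is destructive to whatever is on the target, and /dev/sdb2 is assumed to be the journal partition from the parted output above):

# fio --filename=/dev/sdb2 --direct=1 --sync=1 --rw=write --bs=4k \
      --iodepth=1 --numjobs=1 --runtime=60 --time_based --name=journal-test

Drives that honour CMD_FLUSH on every sync write should collapse here, while capacitor-backed drives should stay close to their normal write speed.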


>>Optimus random write performance is ~15K IOPS (4K io_size). Presently, I don't have any write performance data (on ceph) with that; I will run some tests with it soon and share.

Impressive results! I haven't chosen the SSD model for my production cluster yet (target 2015); I'll have a look at these Optimus drives.
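
When I compare candidate drives, I'll probably run a plain 4k random write test like this (parameters are my own assumption, not necessarily how the ~15K figure was measured; again destructive, replace /dev/sdX with the drive under test):

# fio --filename=/dev/sdX --direct=1 --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=4 --runtime=60 --group_reporting --name=randwrite-test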


----- Original Message ----- 

De: "Somnath Roy" <Somnath.Roy at sandisk.com> 
?: "Mark Kirkwood" <mark.kirkwood at catalyst.net.nz>, "Alexandre DERUMIER" <aderumier at odiso.com>, "Sebastien Han" <sebastien.han at enovance.com> 
Cc: ceph-users at lists.ceph.com 
Envoy?: Mercredi 17 Septembre 2014 03:22:05 
Objet: RE: [Single OSD performance on SSD] Can't go over 3, 2K IOPS 

Hi Mark/Alexandre, 
Are the results with journal and data configured on the same SSD? 
Also, how are you configuring your journal device? Is it a block device? 
If journal and data are not on the same device, results may change. 

BTW, there are SSDs like the SanDisk Optimus drives that use capacitor-backed DRAM and thus always ignore the CMD_FLUSH command, since the drive guarantees that once data reaches it, it will be power-fail safe. So you don't need the kernel patch. Optimus random write performance is ~15K IOPS (4K io_size). Presently, I don't have any write performance data (on ceph) with that; I will run some tests with it soon and share. 

Thanks & Regards 
Somnath 

-----Original Message----- 
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Mark Kirkwood 
Sent: Tuesday, September 16, 2014 3:36 PM 
To: Alexandre DERUMIER; Sebastien Han 
Cc: ceph-users at lists.ceph.com 
Subject: Re: [Single OSD performance on SSD] Can't go over 3.2K IOPS 

On 17/09/14 08:39, Alexandre DERUMIER wrote: 
> Hi, 
> 
>>> I'm just surprised that you're only getting 5299 with 0.85 since 
>>> I've been able to get 6.4K, well I was using the 200GB model 
> 
> Your model is 
> DC S3700 
> 
> mine is DC S3500 
> 
> with lower writes, so that could explain the difference. 
> 

Interesting - I was getting 8K IOPS with 0.85 on a 128G M550 - this suggests that the bottleneck is not only sync write performance (as your 
S3500 does much better there), but write performance generally (where the 
M550 is faster). 

Cheers 

Mark 

_______________________________________________ 
ceph-users mailing list 
ceph-users at lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 


