Re: Intel 520/530 SSD for ceph

> I used a block size of 350k as my graphs show me that this is the
> average workload we have on the journal.

Pretty interesting metric, Stefan.
Has anyone seen the same behaviour?
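
For anyone who wants to check the same thing on their own journal device, here is a minimal fio sketch for sync writes at that block size (/dev/sdX is just a placeholder for a scratch SSD, and the flags are only a starting point):

	fio --name=journal-350k --filename=/dev/sdX --direct=1 --sync=1 \
	    --rw=write --bs=350k --iodepth=1 --numjobs=1 \
	    --runtime=60 --time_based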

--
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien.han@xxxxxxxxxxxx 
Address : 10, rue de la Victoire - 75009 Paris 
Web : www.enovance.com - Twitter : @enovance 

On 22 Nov 2013, at 02:37, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:

> On 11/21/2013 02:36 AM, Stefan Priebe - Profihost AG wrote:
>> Hi,
>> 
>> On 21.11.2013 01:29, mdw@xxxxxxxxxxxx wrote:
>>> On Tue, Nov 19, 2013 at 09:02:41AM +0100, Stefan Priebe wrote:
>>> ...
>>>>> You might be able to vary this behavior by experimenting with sdparm,
>>>>> smartctl or other tools, or possibly with different microcode in the drive.
>>>> Which values or settings do you have in mind?
>>> ...
>>> 
>>> Off-hand, I don't know.  Probably the first thing would be
>>> to compare the configuration of your 520 & 530; anything that's
>>> different is certainly worth investigating.
>>> 
>>> This should display all pages:
>>> 	sdparm --all --long /dev/sdX
>>> The 520 only appears to have 3 pages, which can be fetched directly with:
>>> 	sdparm --page=ca --long /dev/sdX
>>> 	sdparm --page=co --long /dev/sdX
>>> 	sdparm --page=rw --long /dev/sdX
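>>> 
>>> A quick way to run that 520-vs-530 comparison (a sketch only; /dev/sdX and
>>> /dev/sdY are placeholders for the two drives) is to dump everything and
>>> diff it:
>>> 	sdparm --all --long /dev/sdX > 520.txt
>>> 	sdparm --all --long /dev/sdY > 530.txt
>>> 	diff 520.txt 530.txt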
>>> 
>>> The sample machine I'm looking at has an Intel 520, and on ours,
>>> most options show as 0 except for:
>>>   AWRE        1  [cha: n, def:  1]  Automatic write reallocation enabled
>>>   WCE         1  [cha: y, def:  1]  Write cache enable
>>>   DRA         1  [cha: n, def:  1]  Disable read ahead
>>>   GLTSD       1  [cha: n, def:  1]  Global logging target save disable
>>>   BTP        -1  [cha: n, def: -1]  Busy timeout period (100us)
>>>   ESTCT      30  [cha: n, def: 30]  Extended self test completion time (sec)
>>> Perhaps that's an interesting data point to compare with yours.
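>>> 
>>> If you do want to experiment, WCE is the only field marked changeable
>>> above; one idea (only on a disposable drive, since it changes write-back
>>> behaviour) is to flip it with sdparm and re-run your benchmark:
>>> 	sdparm --set=WCE /dev/sdX      # enable the volatile write cache
>>> 	sdparm --clear=WCE /dev/sdX    # switch to write-through instead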
>>> 
>>> Figuring out if you have up-to-date Intel firmware appears to require
>>> burning and running an ISO image from
>>> https://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=18455
>>> 
>>> The results of sdparm --page=<whatever> --long /dev/sdc
>>> show the Intel firmware version, but this labels it better:
>>> 	smartctl -i /dev/sdc
>>> Our 520 has firmware "400i" loaded.
>> 
>> Firmware is up to date and all values are the same. I expect that the 520
>> firmware just ignores CMD_FLUSH commands and the 530 does not.
> 
> For those of you who don't follow LKML, there is some interesting discussion going on about this same issue (Hi Stefan!):
> 
> https://lkml.org/lkml/2013/11/20/158
> 
> Can anyone think of a reasonable (i.e. not yanking the power out) way to test what CMD_FLUSH is actually doing? I have some 520s in our test rig that I can play with. Otherwise, maybe an Intel engineer can chime in and let us know what's going on?
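> 
> One indirect check, short of pulling power: compare a fio run without explicit flushes against one that issues a flush after every write (fsync on a raw block device should end up as a CMD_FLUSH). It doesn't prove what the firmware does internally, but if both runs show nearly the same IOPS, the flush is suspiciously cheap. A rough sketch, with /dev/sdX as a disposable test drive:
> 	# baseline: direct 4k writes, no explicit flushes
> 	fio --name=noflush --filename=/dev/sdX --direct=1 --rw=write --bs=4k --iodepth=1 --runtime=30 --time_based
> 	# same workload, but flush after every write
> 	fio --name=flush --filename=/dev/sdX --direct=1 --fsync=1 --rw=write --bs=4k --iodepth=1 --runtime=30 --time_based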
> 
>> 
>> Greets,
>> Stefan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




