Re: PCIe journal benefit for SSD OSDs

On 07.09.2017 10:22, Marc Roos wrote:
>  
> Sorry to cut in on your thread. 
> 
>> Have you disabled the FLUSH command for the Samsung ones?
> 
> We have a test cluster, currently with only a spinner pool, but we have 
> SM863s available to create the SSD pool. Is there anything specific that 
> needs to be done for the SM863?

I've not tested how the SM863a behaves, but at least with the "older"
SV843 and SM863 you need to disable the FLUSH command for those SSDs.
This is safe because they have working power-loss capacitors, so the
drive can flush its own cache on power failure.

You do this by writing the string "temporary write through" to
/sys/block/sdb/device/scsi_disk/*/cache_type
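
For a handful of drives a small loop does it; a sketch, assuming the
OSD SSDs are sdb through sdk (adjust to your layout). Note the setting
does not survive a reboot, so hook it into rc.local or a udev rule:

  # "temporary" means the kernel only changes its own behaviour and
  # sends no MODE SELECT to the drive, so the disk firmware is untouched
  for f in /sys/block/sd[b-k]/device/scsi_disk/*/cache_type; do
      echo "temporary write through" > "$f"
  done

Reading the file back should then show "write through".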

Greets,
Stefan

> 
> -----Original Message-----
> From: Stefan Priebe - Profihost AG [mailto:s.priebe@xxxxxxxxxxxx] 
> Sent: Thursday, 7 September 2017 8:04
> To: Christian Balzer; ceph-users
> Subject: Re: PCIe journal benefit for SSD OSDs
> 
> Hello,
> On 07.09.2017 03:53, Christian Balzer wrote:
>>
>> Hello,
>>
>> On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote:
>>
>>> We are planning a Jewel filestore-based cluster for a 
>>> performance-sensitive healthcare client, and the conservative OSD 
>>> choice is the Samsung SM863A.
>>>
>>
>> While I totally see where you're coming from, and I have stated that
>> I'll give Luminous and Bluestore some time to mature, I'd also be
>> looking into that if I were in the planning phase now, with about
>> 3 months before deployment.
>> The inherent performance increase with Bluestore (and having something
>> that hopefully won't need touching/upgrading for a while) shouldn't be
>> ignored.
> 
> Yes, and that's where I currently am as well, thinking about how to 
> design a new cluster based on Bluestore.
> 
>> The SSDs are fine; I've started using them recently (though not with
>> Ceph yet), as the Intel DC S36xx and S37xx are impossible to get.
>> They're a bit slower in the write IOPS department, but good enough
>> for me.
> 
> I've never used the Intel DC ones, only the Samsungs. Are the Intel 
> ones really faster? Have you disabled the FLUSH command for the Samsung 
> ones? They don't ignore the command automatically like the Intel ones 
> do. Sadly the Samsung SM863 got more expensive over the last months; 
> they were a lot cheaper in the first months of 2016. Maybe the 2.5" 
> Intel Optane SSDs will change the game.
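> 
> A quick way to compare them is the usual single-job O_DSYNC fio test 
> (it writes to the raw device, so only on an empty disk; /dev/sdX is a 
> placeholder):
> 
>   fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
>       --rw=write --bs=4k --numjobs=1 --iodepth=1 \
>       --runtime=60 --time_based --group_reporting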
> 
>>> but was wondering if anyone has seen a positive impact from also 
>>> using PCIe journals (e.g. Intel P3700 or even the older 910 series) 
>>> in front of such SSDs?
>>>
>> NVMe journals (or WAL and DB space for Bluestore) are nice and can 
>> certainly help, especially if Ceph is tuned accordingly.
>> Avoid non-DC NVMes. I doubt you can still get 910s; they are
>> officially EOL.
>> You want to match capabilities and endurance: a DC P3700 800GB would
>> be an OK match for 3-4 SM863a 960GB, for example.
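>> Back of the envelope, going by the spec sheets (so take it roughly):
>> an SM863a 960GB does ~0.5 GB/s of sequential writes, a P3700 800GB
>> ~1.9 GB/s, so one journal device keeps up with 3-4 of those SSDs, and
>> the endurance ratings (~3.6 vs ~10 DWPD) scale about the same way.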
> 
> That's a good point, but it makes the cluster more expensive. Currently, 
> with filestore, I use one SSD for both journal and data, which works 
> fine.
> 
> With Bluestore we have block, DB and WAL, so we need 3 block devices per 
> OSD. If we need one PCIe or NVMe device per 3-4 OSDs, it gets much more 
> expensive per host; we're currently running 10 OSDs / SSDs per node.
> 
> Have you already tested how the performance changes with Bluestore 
> when putting all 3 block devices on the same SSD?
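> 
> If not, I guess I'll compare both layouts myself, e.g. (untested, going 
> by the ceph-disk docs; device names are made up):
> 
>   # everything (block, DB, WAL) on the one SSD:
>   ceph-disk prepare --bluestore /dev/sdb
>   # DB and WAL carved out of a shared NVMe:
>   ceph-disk prepare --bluestore /dev/sdb \
>       --block.db /dev/nvme0n1 --block.wal /dev/nvme0n1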
> 
> Greets,
> Stefan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


