Re: PCIe journal benefit for SSD OSDs

Hello, Alexandre!

  Do you have any testing methodology to share? I have a fresh Luminous 12.2.0 test cluster with 4 nodes, each with 1 x 1.92TB Samsung SM863 + InfiniBand, in an unsupported setup (co-located system/mon/OSD partition and BlueStore partition on the same drive, created with a modified ceph-disk). I would be very grateful for advice on how to do stress and performance testing. I already have raw SSD performance results with fio and have done some tests with rbd as well, but I'm not sure they are correct.
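
  For reference, the raw-device fio run that usually gets quoted for journal/WAL suitability is a single-threaded 4k sync write straight to the disk (destructive, so only on an empty drive) - something along these lines, with the device name being a placeholder:

  # measures sustained sync-write IOPS, the pattern a Ceph journal/WAL generates
  fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
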
  I'll be happy to do more tests and share the results, and will wait for yours, of course!

  Thanks!

2017-09-07 22:31 GMT+05:00 Alexandre DERUMIER <aderumier@xxxxxxxxx>:
Hi Stefan

>>Have you already done tests on how the performance changes with bluestore
>>when putting all 3 block devices on the same SSD?


I'm going to test bluestore with 3 nodes, 18 x Intel S3610 1.6TB, in the coming weeks.

I'll send the results to the mailing list.



----- Original Message -----
From: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx>
To: "Christian Balzer" <chibi@xxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, September 7, 2017 08:03:31
Subject: Re: PCIe journal benefit for SSD OSDs

Hello,
On 07.09.2017 at 03:53, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote:
>
>> We are planning a Jewel filestore based cluster for a performance
>> sensitive healthcare client, and the conservative OSD choice is
>> Samsung SM863A.
>>
>
> While I totally see where you're coming from, and despite having stated
> that I'll give Luminous and Bluestore some time to mature, I'd also be
> looking into that if I were in the planning phase now, with something
> like 3 months before deployment.
> The inherent performance increase with Bluestore (and having something
> that hopefully won't need touching/upgrading for a while) shouldn't be
> ignored.

Yes, and that's the point I'm currently at as well, thinking about how
to design a new cluster based on bluestore.

> The SSDs are fine; I've started using them recently (though not with
> Ceph yet), as Intel DC S36xx or 37xx are impossible to get.
> They're a bit slower in the write IOPS department, but good enough for me.

I've never used the Intel DC ones, always the Samsungs - are the Intels
really faster? Have you disabled the FLUSH command for the Samsung ones?
They don't skip the command automatically like the Intels do. Sadly, the
Samsung SM863 has gotten more expensive over the last months; they were a
lot cheaper in the first month of 2016. Maybe the 2.5" Optane Intel SSDs
will change the game.
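
For reference, that is usually done per device via sysfs by marking the
disk cache as write-through, which makes the kernel stop issuing flushes
for that device - roughly like this (the SCSI address is just an example):

# tell the kernel to treat the device cache as write-through (skips FLUSH)
echo "write through" > /sys/class/scsi_disk/0:0:0:0/cache_type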

>> but was wondering if anyone has seen a positive
>> impact from also using PCIe journals (e.g. Intel P3700 or even the
>> older 910 series) in front of such SSDs?
>>
> NVMe journals (or WAL and DB space for Bluestore) are nice and can
> certainly help, especially if Ceph is tuned accordingly.
> Avoid non-DC NVMes; I doubt you can still get 910s, they are officially
> EOL.
> You want to match capabilities and endurance; a DC P3700 800GB would be
> an OK match for 3-4 SM863a 960GB, for example.
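
For reference, that layout would look roughly like this with ceph-disk on
Luminous (device names are placeholders; ceph-disk carves the DB and WAL
partitions out of the NVMe by itself):

# data on the SATA SSD, DB and WAL on the shared NVMe
ceph-disk prepare --bluestore /dev/sdb \
    --block.db /dev/nvme0n1 --block.wal /dev/nvme0n1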

That's a good point, but it makes the cluster more expensive. Currently,
with filestore, I use one SSD for both journal and data, which works fine.

With bluestore we have block, DB and WAL, so we need 3 block devices per
OSD. If we need one PCIe or NVMe device per 3-4 devices, it gets much
more expensive per host - currently running 10 OSDs / SSDs per node.

Have you already done tests on how the performance changes with bluestore
when putting all 3 block devices on the same SSD?
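
With ceph-disk that would just be a plain prepare without separate DB/WAL
devices, if I'm not mistaken - the DB and WAL then simply live inside the
main block device:

ceph-disk prepare --bluestore /dev/sdb    # block, DB and WAL all on the one SSD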

Greets,
Stefan



--
Best regards,
Vladimir
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
