Yes, stand-alone OSDs (WAL/DB/data all on the same disk); the layout is the same as it was for Jewel / FileStore. Even if they are consumer SSDs, why would they be 40% faster with an older version of Ceph?
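One quick way to sanity-check the consumer-SSD theory is to measure small synchronous write latency on one of the drives, since BlueStore's RocksDB WAL effectively issues sync writes straight to the OSD device. A minimal sketch, where the target path is a placeholder for a scratch file on a filesystem backed by one of the same 860 EVOs (not a device an OSD is currently using):

    # Minimal sketch: measure synchronous 4 KiB write latency, roughly the
    # kind of I/O BlueStore's RocksDB WAL generates for small writes.
    # TARGET is a placeholder -- point it at a scratch file on a filesystem
    # backed by the same SSD model, never at a raw device in use by an OSD.
    import os, time

    TARGET = "/mnt/ssd-test/syncfile"   # placeholder path, adjust for your setup
    BLOCK = b"\0" * 4096
    ITERATIONS = 1000

    fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    try:
        latencies = []
        for _ in range(ITERATIONS):
            start = time.perf_counter()
            os.write(fd, BLOCK)          # each write must reach stable storage
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)

    latencies.sort()
    print("avg %.3f ms, p99 %.3f ms"
          % (sum(latencies) / len(latencies) * 1000,
             latencies[int(len(latencies) * 0.99)] * 1000))

Drives without power-loss protection often show sync-write latencies an order of magnitude worse than their datasheet numbers, which is the usual reason consumer SSDs behave differently once the journal/WAL pattern changes.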
From: Mohamad Gebai <mgebai@xxxxxxx>
Date: Thursday, February 21, 2019 at 9:44 AM
To: "Smith, Eric" <Eric.Smith@xxxxxxxx>, Sinan Polat <sinan@xxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] BlueStore / OpenStack Rocky performance issues
What is your setup with BlueStore? Standalone OSDs, or do they have their WAL/DB partitions on a separate device? How does that compare to where the journal lived in your FileStore setup?
On a separate note, these look like they're consumer SSDs, which makes them not a great fit for Ceph.
Mohamad
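For reference, a minimal sketch of how the WAL/DB placement can be confirmed from the OSD metadata (assumes the ceph CLI and an admin keyring are available on the host; osd.0 is just an example id):

    # Minimal sketch: dump the device-related metadata for one OSD to see
    # whether its WAL/DB share the data device. osd id "0" is an example.
    import json, subprocess

    osd_id = "0"
    out = subprocess.check_output(
        ["ceph", "osd", "metadata", osd_id, "--format", "json"])
    meta = json.loads(out)

    for key, value in sorted(meta.items()):
        # keys containing "bluefs" or "bdev" describe where WAL/DB/data live
        if "bluefs" in key or "bdev" in key:
            print("%s: %s" % (key, value))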
On 2/21/19 9:29 AM, Smith, Eric wrote:
40% slower performance compared to Ceph Jewel / OpenStack Mitaka backed by the same SSDs ☹. I have 30 OSDs on SSDs (Samsung 860 EVO, 1 TB each).
Hi Eric,
40% slower performance compared to what? Could you please share your current performance numbers? How many OSD nodes do you have?
Regards,
Sinan
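To get numbers that can be compared directly, a short rados bench run against a throwaway pool on both the Luminous and Jewel clusters would help. A minimal sketch wrapping the standard tool (the pool name and runtime are placeholders):

    # Minimal sketch: run a short write and random-read benchmark against a
    # pool and print the summaries, so the two clusters can be compared
    # side by side. rados bench writes real objects, so use a throwaway
    # pool (POOL is a placeholder), then clean it up afterwards.
    import subprocess

    POOL = "bench-test"     # placeholder: a throwaway pool, not production
    SECONDS = "30"

    subprocess.run(["rados", "bench", "-p", POOL, SECONDS, "write",
                    "--no-cleanup"], check=True)
    subprocess.run(["rados", "bench", "-p", POOL, SECONDS, "rand"], check=True)
    subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)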
On 21 February 2019 at 14:19, "Smith, Eric" <Eric.Smith@xxxxxxxx> wrote:
Hey folks – I recently deployed Luminous / BlueStore on SSDs to back an OpenStack cluster that supports our build / deployment infrastructure, and I’m getting 40% slower build times. Any thoughts on what I may need to do with Ceph to speed things up? I have 30 SSDs backing an 11-node compute cluster.
Eric
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com