Re: BlueStore / OpenStack Rocky performance issues

I will research the BlueStore cache; thanks for the tip. To answer your questions, though…

 

  1. Performance is measured by the time it takes for my CI to deploy my application to OpenStack (a raw benchmark comparison is sketched below the list)
  2. The workload is spinning up / tearing down 5 instances, 4 of which have many different volumes attached (the same set of volumes for each deployment)
  3. The deployment used to take ~45 minutes (Filestore / Jewel), whereas now it takes somewhere around 75 – 90 minutes (BlueStore / Luminous)
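
To take my CI pipeline out of the equation, I could also run a raw benchmark against a test pool on both clusters and compare the numbers directly (the pool name below is just an example):

    # 60-second 4 MiB write test with 16 concurrent ops, keeping the objects for a read test
    rados bench -p bench 60 write -b 4M -t 16 --no-cleanup
    # sequential read of the objects written above
    rados bench -p bench 60 seq
    # remove the benchmark objects afterwards
    rados -p bench cleanup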

 

Thank you again for the information regarding cache – I will look into that.

Eric

 

From: Mohamad Gebai <mgebai@xxxxxxx>
Date: Thursday, February 21, 2019 at 11:50 AM
To: "Smith, Eric" <Eric.Smith@xxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] BlueStore / OpenStack Rocky performance issues

 

I didn't mean that their being consumer SSDs is the reason for this performance impact; I was just pointing it out as an aside, unrelated to your problem.

40% is a lot more than one would expect to see. How are you measuring the performance? What is the workload, and what numbers are you getting? What numbers were you getting with Filestore?

One of the biggest differences is that Filestore can make use of the page cache, whereas Bluestore manages its own cache. You can try increasing the Bluestore cache and see if it helps. Depending on the data set size and pattern, it might make a significant difference.
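
For example, something like this in ceph.conf on the OSD nodes, followed by an OSD restart (the value is only an illustration; size it to the RAM you can spare per OSD, and double-check the option name against your Luminous point release):

    [osd]
    # BlueStore cache per OSD for SSD-backed OSDs, in bytes (Luminous default is 3 GiB)
    bluestore_cache_size_ssd = 8589934592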

Mohamad

On 2/21/19 11:36 AM, Smith, Eric wrote:

Yes, stand-alone OSDs (WAL/DB/data all on the same disk); this is the same layout as it was for Jewel / Filestore. Even if they are consumer SSDs, why would they be 40% faster with an older version of Ceph?

 

From: Mohamad Gebai <mgebai@xxxxxxx>
Date: Thursday, February 21, 2019 at 9:44 AM
To: "Smith, Eric" <Eric.Smith@xxxxxxxx>, Sinan Polat <sinan@xxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] BlueStore / OpenStack Rocky performance issues

 

What is your setup with Bluestore? Standalone OSDs? Or do they have their WAL/DB partitions on another device? How does it compare to your Filestore setup for the journal?
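
For comparison, a split layout with the DB on a faster device would be created with something along these lines (device paths here are only placeholders):

    # BlueStore OSD with data on the SSD and the RocksDB DB on a separate partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1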

On a separate note, these look like they're consumer SSDs, which makes them not a great fit for Ceph.

Mohamad


On 2/21/19 9:29 AM, Smith, Eric wrote:

40% slower performance compared to Ceph Jewel / OpenStack Mitaka backed by the same SSDs. I have 30 OSDs on SSDs (Samsung 860 EVO, 1TB each).

 

From: Sinan Polat <sinan@xxxxxxxx>
Sent: Thursday, February 21, 2019 8:43 AM
To: ceph-users@xxxxxxxxxxxxxx; Smith, Eric <Eric.Smith@xxxxxxxx>
Subject: Re: [ceph-users] BlueStore / OpenStack Rocky performance issues

 

Hi Eric,

40% slower performance compared to what? Could you please share the current performance numbers? How many OSD nodes do you have?

Regards,
Sinan

On 21 February 2019 at 14:19, "Smith, Eric" <Eric.Smith@xxxxxxxx> wrote:

Hey folks – I recently deployed Luminous / BlueStore on SSDs to back an OpenStack cluster that supports our build / deployment infrastructure, and I'm getting 40% slower build times. Any thoughts on what I may need to do with Ceph to speed things up? I have 30 SSDs backing an 11-compute-node cluster.

 

Eric


 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
