Re: Case where a separate Bluestore WAL/DB device crashes...

Thanks Jonathan, your feedback is really interesting.

It reassures me about adding separate SSDs for the WAL/DB partitions.

I now have to deploy a new Ceph cluster with 6 OSD nodes (each containing 22 SAS 10k OSDs).
Following the recommendations at http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/ (the BlueStore journal will always be placed on the fastest device available...), I plan to add to each node:
  • a write-intensive SSD drive for the WAL partitions
  • a mixed-use SSD drive for the DB partitions
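As a rough sketch, that layout could be provisioned per OSD with `ceph-volume` (the device names below are assumptions for illustration, not from this thread; substitute your actual data drive and pre-created SSD partitions):

```shell
# Hypothetical devices: /dev/sdb is one of the 22 SAS 10k data drives,
# /dev/sdy1 is a partition on the write-intensive SSD (WAL),
# /dev/sdz1 is a partition on the mixed-use SSD (DB).
ceph-volume lvm create \
    --bluestore \
    --data /dev/sdb \
    --block.wal /dev/sdy1 \
    --block.db /dev/sdz1
```

Note that if `--block.wal` is omitted, the WAL is stored alongside the DB, so a separate WAL device only pays off when it is faster than the DB device.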
rv

On 01/03/2018 at 17:21, Jonathan Proulx wrote:
On Thu, Mar 01, 2018 at 04:57:59PM +0100, Hervé Ballans wrote:

:Can we find recent benchmarks on this performance issue related to the
:location of WAL/DBs ?

I don't have benchmarks but I have some anecdotes.

We previously had 4T NLSAS (7.2k) filestore data drives with journals
on SSD (5:1 ssd:spinner).  We had unpleasant latency, and at about 60%
space utilization we were at 80%+ IOPS utilization.

We decided to go with smaller 2T but still slow 7.2k NLSAS drives for
the next expansion, to spread IOPS over more (but still cheap) spindles.
This coincided with bluestore going official in Luminous, so we did not
spec SSDs.

This worked out fairly well: the 2T drives had similar but slightly
lower IOPS utilization and dramatically improved latency.

Based on this we decided to do rolling conversions of the older 4T servers
to bluestore (they were already on Luminous), removing the SSD layer with
an eye to making a performance pool out of them later.

This went poorly. Latency improved to the same extent we saw on the newer
2T drives, but IOPS frequently flatlined at 100% during deep scrubs,
resulting in slow requests, blocked PGs, and very sad VMs on top of it
all.

We went back and reformatted the OSDs to use bluestore with the DB on
SSD.  This kept the improved latency characteristics and dropped IOPS
on the spinning disks back to about the same as (maybe slightly less than)
filestore, so not great but acceptable.

Much of this suffering is due to our budgetary requirements being clearer
than our performance requirements.  But at least for slow spinners the
SSD can make a big impact; presumably, with faster disks, the SSD would
have a more marginal effect.

-Jon


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
