Re: Observation of bluestore db/wal performance

FWIW, the DB and WAL don't really do the same thing that the cache tier does.  The WAL is similar to filestore's journal, and the DB is primarily for storing metadata (onodes, blobs, extents, and OMAP data).  Offloading these things to an SSD will definitely help, but you won't see the same kind of behavior that you would see with cache tiering (especially if the workload is small enough to fit entirely in the cache tier).
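
For anyone reproducing this setup, a minimal sketch of building a bluestore OSD with the DB and WAL offloaded to an SSD, assuming a Luminous-era ceph-volume. The device names are placeholders:

    # /dev/sdb is the spinning data device; /dev/nvme0n1p1 and /dev/nvme0n1p2
    # are SSD partitions for the DB and WAL (placeholders, adjust to your layout).
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
    # If the DB and WAL share one fast device, --block.db alone is enough;
    # the WAL is placed inside the DB partition automatically.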


IMHO the biggest performance consideration with cache tiering is when your workload doesn't fit entirely in the cache and you are evicting large quantities of data over the network.  Depending on a variety of factors this can be pretty slow (and in fact can be slower than not using a cache tier at all!).  If your workload fits entirely within the cache tier though, it's almost certainly going to be faster than bluestore without a cache tier.
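
For reference, a rough sketch of the commands involved in fronting a base pool with a writeback cache tier. The pool names and the hit set / eviction numbers are illustrative only and need tuning to the actual hardware:

    ceph osd tier add base-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay base-pool cache-pool
    # The tiering agent needs a hit set and an eviction target to do anything;
    # these values are examples, not recommendations.
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool hit_set_count 12
    ceph osd pool set cache-pool hit_set_period 14400
    ceph osd pool set cache-pool target_max_bytes 1000000000000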


Mark


On 7/21/19 9:39 AM, Shawn Iverson wrote:
Just wanted to post an observation here.  Perhaps someone with the resources to run performance tests is interested in comparing, or has some insight into why I observed this.

Background:

12 node ceph cluster
3-way replicated by chassis group
3 chassis groups
4 nodes per chassis
running Luminous (up to date)
heavy use of block storage for kvm virtual machines (proxmox)
some cephfs usage (<10%)
~100 OSDs
~100 pgs/osd
500GB average OSD capacity

I recently attempted to do away with my ssd cache tier on Luminous and replace it with bluestore OSDs with db/wal on ssd, as this seemed to be the better practice, or so I thought.

Sadly, after 2 weeks of rebuilding OSDs and placing the db/wal on ssd, I was sorely disappointed with performance.  My cluster performed poorly.  It seemed that the db/wal on ssd did not deliver the performance boost I was used to.  I used 60 GB for the size.  Unfortunately, I did not have enough ssd capacity to make it any larger for my OSDs.
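
One way to sanity-check whether a 60 GB DB was actually big enough is to pull the perf counters from each OSD's admin socket and look at the bluefs section (osd.0 below is a placeholder). The assumption here is that db_used_bytes close to the partition size, or a non-zero slow_used_bytes, points to the DB spilling onto the slow device:

    # Run on the host carrying the OSD and inspect the "bluefs" section:
    # db_used_bytes is the RocksDB footprint; slow_used_bytes > 0 means
    # DB data has spilled over onto the slow (HDD) device.
    ceph daemon osd.0 perf dump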

Despite the words of caution in the Ceph docs regarding a replicated base tier with a replicated cache tier, I returned to cache tiering.

Performance has returned to expectations.

It would be interesting if someone had the spare iron and resources to benchmark bluestore OSDs with SSD db/wal against cache tiering and provide some statistics.
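
As a starting point, something like rados bench against a test pool on each configuration would produce comparable numbers. The pool name and parameters below are only examples:

    # 60-second small-block write test with 16 concurrent ops, keeping the
    # objects so a sequential read pass can follow; then clean up.
    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados -p testpool cleanup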

--
Shawn Iverson, CETL
Director of Technology
Rush County Schools
765-932-3901 option 7
iversons@xxxxxxxxxxxxxxxxxxx


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



