Observation of bluestore db/wal performance

Just wanted to post an observation here.  Perhaps someone with the resources to run proper performance tests is interested in comparing the two setups, or has some insight into why I observed this.

Background:

12-node Ceph cluster
3-way replicated by chassis group
3 chassis groups
4 nodes per chassis
running Luminous (up to date)
heavy use of block storage for KVM virtual machines (Proxmox)
some CephFS usage (<10%)
~100 OSDs
~100 PGs/OSD
500 GB average OSD capacity

I recently attempted to do away with my SSD cache tier on Luminous and replace it with BlueStore OSDs with the db/wal on SSD, as this seemed to be the better practice, or so I thought.

Sadly, after 2 weeks of rebuilding OSDs and placing the db/wal on SSD, I was sorely disappointed with performance.  My cluster performed poorly.  The db/wal on SSD simply did not give the boost I was used to getting from the cache tier.  I used 60 GB for the db size; unfortunately, I did not have enough SSD capacity to make it any larger for my OSDs.
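
For reference, each OSD was recreated with something along these lines (a sketch only; device names are placeholders, and the wal co-locates with the db when only a db device is given):

    # rebuild an OSD as BlueStore with its db (and wal) on an SSD partition
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1    # ~60 GB partition in my case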

Despite the words of caution in the Ceph docs regarding a replicated base tier with a replicated cache tier, I returned to cache tiering.
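
For anyone wanting to reproduce the tiering side, it was just the standard writeback setup, roughly like this (pool names and the target size are examples, not my exact values):

    # put an SSD pool in front of the HDD-backed pool as a writeback cache
    ceph osd tier add rbd-hdd rbd-ssd-cache
    ceph osd tier cache-mode rbd-ssd-cache writeback
    ceph osd tier set-overlay rbd-hdd rbd-ssd-cache
    ceph osd pool set rbd-ssd-cache hit_set_type bloom
    ceph osd pool set rbd-ssd-cache target_max_bytes 500000000000   # tune to SSD capacity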

Performance has returned to expectations.

It would be interesting if someone with the spare iron and resources could benchmark BlueStore OSDs with SSD db/wal against cache tiering and share some statistics.
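
Even a simple rados bench run against a pool in each configuration would be telling, e.g. (pool name and runtime are arbitrary):

    # 60-second write test, then sequential reads over the same objects, then cleanup
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados -p testpool cleanup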

--
Shawn Iverson, CETL
Director of Technology
Rush County Schools
765-932-3901 option 7

