Re: Persistent Write Back Cache


From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of John Spray
Sent: 04 March 2015 11:34
To: Nick Fisk; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Persistent Write Back Cache

On 04/03/2015 08:26, Nick Fisk wrote:

To illustrate the difference a proper write back cache can make, I put a 1GB flashcache (512MB dirty threshold) in front of my RBD and tweaked the flush parameters to flush dirty blocks at a large queue depth. The same fio test (128k, iodepth=1) now runs at 120MB/s and is limited by the performance of the SSD used by flashcache, as everything is stored as 4k blocks on the SSD. In fact, since everything is stored as 4k blocks, pretty much all IO sizes are accelerated to the maximum speed of the SSD. Looking at iostat I can see all the IOs getting coalesced into nice large 512KB IOs at a high queue depth, which Ceph easily swallows.
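
Roughly, the coalescing effect looks like this (a toy Python sketch to illustrate the behaviour I'm seeing, not flashcache's actual code):

# Toy sketch of the coalescing effect described above -- not flashcache's
# implementation, just an illustration of why small sequential writes end
# up hitting the backing RBD as large IOs.

BLOCK = 4096            # the cache stores everything as 4k blocks
MAX_FLUSH = 512 * 1024  # merge contiguous dirty blocks up to 512KB per IO

def coalesce(dirty_offsets):
    """Merge contiguous 4k dirty blocks into (offset, length) flush IOs."""
    ios = []
    for off in sorted(dirty_offsets):
        if ios and off == ios[-1][0] + ios[-1][1] and ios[-1][1] < MAX_FLUSH:
            ios[-1] = (ios[-1][0], ios[-1][1] + BLOCK)  # extend the current IO
        else:
            ios.append((off, BLOCK))                    # start a new IO
    return ios

# 256 sequential 4k writes collapse into just two 512KB IOs:
print(coalesce(range(0, 256 * BLOCK, BLOCK)))

In other words, a stream of small dirty blocks leaves the cache as a handful of large IOs, which matches what iostat shows hitting the RBD.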

 

If librbd could support writing its cache out to an SSD, it would hopefully achieve the same level of performance, and having it integrated would be really neat.

What are you hoping to gain from building something into Ceph instead of using flashcache/bcache/dm-cache on top of it? It seems like, since you would need to handle your HA configuration anyway, setting up the actual cache device would be the simple part.

Cheers,
John

 

Hi John,

 

I guess it’s to make things easier, rather than having to run a huge stack of different technologies to achieve the same goal, especially when half of the caching logic is already in Ceph. It would be really nice, and would drive adoption, if you could add an SSD, set a config option, and suddenly have a storage platform that performs 10x faster.

 

Another way of handling it might be for librbd to be pointed at a UUID instead of a /dev/sd* device. That way librbd knows which cache device to look for and can error out if the cache device is missing. These cache devices could then be presented to all the necessary servers via iSCSI or something similar if the RBD needs to move around.
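
Something along these lines, roughly (a quick Python sketch of the lookup I have in mind, using the standard /dev/disk/by-uuid symlinks; the UUID below is made up and librbd has no such option today):

import os
import sys

# Rough sketch of the idea above: resolve the cache device through the
# standard /dev/disk/by-uuid symlinks and refuse to start if it isn't
# present, rather than trusting a /dev/sd* name that can change between
# hosts or reboots.

def find_cache_device(uuid):
    path = os.path.join("/dev/disk/by-uuid", uuid)
    if not os.path.exists(path):
        sys.exit("cache device %s not found, refusing to open the RBD uncached" % uuid)
    return os.path.realpath(path)   # e.g. /dev/sdb1 on this particular host

# made-up UUID purely for illustration
print(find_cache_device("3f79f72b-45f2-4f6e-9e2c-aaaaaaaaaaaa"))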

 

Nick


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
