Re: Hammer Cache Tiering


 



Hello,

 

I already have journals on SSDs, but a journal is only designed for very short write bursts and is not going to help when someone is writing, for example, a 100GB backup file. In my eyes an SSD tier set to cache writes only will allow the 100GB write to complete much quicker and with higher IOPS, leaving the backend free to respond to any read requests. Any operations on the recently written file will then be served at the cache layer and flushed down to the OSDs later, once the data is no longer needed (colder storage).
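
Roughly what I have in mind, as a sketch only; the pool names are placeholders (not my real pools), and on newer releases the cache-mode change may ask for --yes-i-really-mean-it:

    # Hypothetical pool names: rbd-data is the existing backing pool, ssd-cache the new SSD pool
    ceph osd tier add rbd-data ssd-cache
    ceph osd tier cache-mode ssd-cache readforward   # reads forwarded to the backing tier, writes cached
    ceph osd tier set-overlay rbd-data ssd-cache

    # A hit set is still required even when the tier is mostly used as a write sponge
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache hit_set_count 1
    ceph osd pool set ssd-cache hit_set_period 3600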

 

It’s something I am looking to test, to see whether the performance is decent before I decide whether to keep it. My question was mainly to check that there are no issues in the Hammer release that could lead to corruption at the FS level, as mentioned in a few old ML emails.

 

Thanks,

Ashley

 

From: Christian Wuerdig [mailto:christian.wuerdig@xxxxxxxxx]
Sent: Wednesday, 2 November 2016 12:57 PM
To: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
Cc: Christian Balzer <chibi@xxxxxxx>; ceph-users@xxxxxxxx
Subject: Re: [ceph-users] Hammer Cache Tiering

 

 

 

On Wed, Nov 2, 2016 at 5:19 PM, Ashley Merrick <ashley@xxxxxxxxxxxxxx> wrote:

Hello,

Thanks for your reply. When you say the latest version, do you mean .6 and not .5?

The use case is large-scale storage VMs, which may see bursts of heavy writes when new data is loaded onto the environment. I'm looking to place the SSD cache in front, currently with a replica count of 3 and a usable size of 1.5TB.

Looking to run in readforward mode, so reads will come directly from the OSD layer, where there is no issue with current read performance; any large writes will first go to the SSDs and then be flushed to the OSDs later, once the SSD cache reaches, for example, 60% full.

So the use case is not so much to store hot DB data that stays in the cache, but to act as a temporary sponge for short, heavy write bursts.
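
As a sketch of what I mean by that (the pool name and exact values are placeholders, not something I have tested yet):

    # Hypothetical cache pool; cap it at the ~1.5TB usable mentioned above and
    # start flushing dirty objects back to the OSDs at roughly 60%
    ceph osd pool set ssd-cache target_max_bytes 1500000000000
    ceph osd pool set ssd-cache cache_target_dirty_ratio 0.6
    ceph osd pool set ssd-cache cache_target_full_ratio 0.8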

 

This is precisely what the journals are for. From what I've seen and read on this list so far, I'd say you will be way better off putting your journals on SSDs in the OSD nodes than trying to set up a cache tier. In general, using a cache as a write buffer sounds the wrong way round to me; typically you want a cache for fast read access (i.e. serving very frequently read data as fast as possible).
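
Just to illustrate the journal route, since it is usually much simpler than a tier; the device names below are placeholders and this is only a sketch:

    # Prepare an OSD with its journal on a separate SSD partition (placeholder devices)
    ceph-disk prepare /dev/sdd /dev/sdb1

    # In ceph.conf, size the journal generously for the expected write bursts
    [osd]
    osd journal size = 10240   # in MB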

 


Ashley


-----Original Message-----
From: Christian Balzer [mailto:chibi@xxxxxxx]
Sent: Wednesday, 2 November 2016 11:48 AM
To: ceph-users@xxxxxxxx
Cc: Ashley Merrick <ashley@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Hammer Cache Tiering


Hello,

On Tue, 1 Nov 2016 15:07:33 +0000 Ashley Merrick wrote:

> Hello,
>
> Currently using a Proxmox & Ceph cluster running on Hammer, looking to update to Jewel shortly. I know I can do a manual upgrade, however I would like to keep to what is well tested with Proxmox.
>
> Looking to put an SSD cache tier in front, however I have seen and read that there have been a few bugs with cache tiering causing corruption. From what I've read they are all fixed in Jewel, however I'm not 100% sure whether the fixes have been pushed to Hammer (even though it is still not EOL for a little while).
>
You will want to read at LEAST the last two threads about "cache tier" in this ML, more if you can.

> Is anyone running cache tiering on Hammer in production without issues? Or is anyone aware of any bugs / issues that mean I should hold off until I upgrade to Jewel, or basically any reason to hold off for a month or so and update to Jewel before enabling a cache tier?
>
The latest Hammer should be fine; 0.94.5 has been working for me a long time, but 0.94.6 is DEFINITELY to be avoided at all costs.
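
If in doubt about what you are actually running on each node, something like this will tell you:

    ceph --version            # locally installed binaries
    ceph tell osd.* version   # what the running OSD daemons report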

A cache tier is a complex beast.
Does it fit your needs/use patterns, and can you afford to make it large enough to actually fit all your hot data in it?

Jewel has more control knobs to help you, so unless you are 100% sure that you know what you're doing or have a cache pool in mind that's as large as your current used data, waiting for Jewel might be a better proposition.
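
To name one example of those knobs (the pool name is a placeholder and the exact availability per release is from memory, so verify against your version):

    # Only promote objects after they show up in several recent hit sets,
    # instead of on the first read/write
    ceph osd pool set cachepool min_read_recency_for_promote 2
    ceph osd pool set cachepool min_write_recency_for_promote 2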

Of course the lack of any official response to the last relevant thread here about the future of cache tiering makes adding/designing a cache tier an additional challenge...


Christian
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
