Re: Steps for Adding Cache Tier

Hello,

On Fri, 13 May 2016 11:57:24 -0400 MailingLists - EWS wrote:

> I have been reading a lot of information about cache-tiers, and I wanted
> to know how best to go about adding the cache-tier to a production
> environment.
>

Did you read my thread titled "Cache tier operation clarifications" and
related posts?

>  
> 
> Our current setup is Infernalis (9.2.1) 4 nodes with 8 x 4TB SATA drives
> per node and 2 x 400GB NVMe acting as journals (1:4 ratio). There is a
> bunch of spare space on the NVMe's so we would like to partition that
> and make them OSDs for a cache-tier. Each NVMe should have about 200GB
> of space available on them giving us plenty of cache space (8 x 200GB),
> placing the journals on the NVMe since they have more than enough
> bandwidth.
> 

That's likely the last Infernalis release (no more bugfixes for it), so you
should consider going to Jewel once it has had time to settle a bit.

Jewel also has much improved cache tiering bits.

I assume those are Intel DC P3700 NVMes?

While they have a very nice 10 DWPD endurance, keep in mind that now each
write will potentially (depending on your promotion settings) get
amplified 3 times per NVMe: 
once for the cache tier, 
once for the journal on that cache tier 
and once (eventually) when the data gets flushed to the base tier.

And that's 4x200GB of effective cache space of course, because even with the
most reliable and well-monitored SSDs you want/need a replication factor of
at least 2.
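
Back-of-the-envelope, with the numbers above (2 NVMe per node, ~200GB spare on
each) and writeback caching:

  raw cache space:      4 nodes x 2 NVMe x 200GB = 1600GB
  usable with size=2:   1600GB / 2               = ~800GB (the 4x200GB above)
  NVMe writes per client write that hits the cache: roughly 3x
    (cache-tier data + cache-tier journal + the base-tier journal when the
     object eventually gets flushed, since those journals sit on the same
     NVMes)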

>  
> 
> Our primary usage for Ceph at this time is powering RBD block storage
> for an OpenStack cluster. The vast majority of our users use the system
> mainly for long term storage (store and hold data) but we do get some
> "hotspots" from time to time and we want to help smooth those out a
> little bit.
> 
>  
> 
> I have read this page:
> http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ and
> believe that I have a handle on most of that.
> 
>  
> 
> I recall some additional information regarding permissions for block
> device access (making sure that your cephx permissions allow access to
> the cache-tier pool).
> 
I'm not using OpenStack myself, but I don't think so.
From a client perspective it is talking to the original pool; the cache
is transparently overlaid.


>  
> 
> Our plan is:
> 
>  
> 
> -          partition the NVMe's and create the OSDs manually with a 0
> weight
> 
You will want to create a new root and buckets before creating the OSDs.
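
Roughly along these lines (just a sketch, the bucket and rule names are made
up):

  ceph osd crush add-bucket nvme root
  ceph osd crush add-bucket node1-nvme host
  ceph osd crush move node1-nvme root=nvme
  # repeat the host bucket for the other three nodes, then add a rule
  # targeting the new root:
  ceph osd crush rule create-simple nvme_ruleset nvme host

The 0-weight OSDs then go under those host buckets, and the cache pool gets
pointed at the new rule with "ceph osd pool set <cachepool> crush_ruleset <id>"
once it exists.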

> 
> -          create our new cache pool, and adjust the crushmap to place
> the cache pool on these OSDs
>

Since you will have multiple roots on the same node, you will need to set
"osd crush update on start = false".
 
> -          make sure permissions and settings are taken care of (making
> sure our cephx volumes user has rwx on the cache-tier pool)
>
Again, doubt that is needed, but that's what even a tiny, crappy test or
staging environment is for.
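
If it does turn out that your client needs explicit access to the cache pool,
it's only an auth caps update along these lines (client/pool names are just
examples, and remember "ceph auth caps" replaces all existing caps, so
re-state the ones you already have):

  ceph auth caps client.volumes mon 'allow r' \
    osd 'allow rwx pool=volumes, allow rwx pool=volumes-cache'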
 
> -          add the cache-tier to our volumes pool
> 
> -          ???
> 
> -          Profit!
> 
>  
Pretty much.
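
The tiering bit itself boils down to something like this (pool names made up
again, the thresholds are only examples you will want to tune for your
workload):

  ceph osd tier add volumes volumes-cache
  ceph osd tier cache-mode volumes-cache writeback
  ceph osd tier set-overlay volumes volumes-cache

  ceph osd pool set volumes-cache hit_set_type bloom
  ceph osd pool set volumes-cache hit_set_count 1
  ceph osd pool set volumes-cache hit_set_period 3600
  ceph osd pool set volumes-cache target_max_bytes 700000000000
  ceph osd pool set volumes-cache cache_target_dirty_ratio 0.4
  ceph osd pool set volumes-cache cache_target_full_ratio 0.8

Do set target_max_bytes (and/or target_max_objects), otherwise the tiering
agent will never flush or evict anything and the cache pool simply fills up.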

> 
> Is there anything we might be missing here? Are there any other issues
> that we might need to be aware of? I seem to recall some discussion on
> the list with regard to settings that were required to make caching work
> correctly, but my memory seems to indicate that these changes were
> already added to the page listed above. Is that assumption correct?
> 
> 
Again, this is the kind of operation you want to get comfortable with on a
test cluster first.

Regards, 

Christian
 
> 
> Tom Walsh
> 
> https://expresshosting.net/
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


