Cephfs upon Tiering

----- Message from Sage Weil <sweil at redhat.com> ---------
    Date: Thu, 11 Sep 2014 14:10:46 -0700 (PDT)
    From: Sage Weil <sweil at redhat.com>
Subject: Re: Cephfs upon Tiering
      To: Gregory Farnum <greg at inktank.com>
      Cc: Kenneth Waegeman <Kenneth.Waegeman at ugent.be>, ceph-users <ceph-users at lists.ceph.com>


> On Thu, 11 Sep 2014, Gregory Farnum wrote:
>> On Thu, Sep 11, 2014 at 11:39 AM, Sage Weil <sweil at redhat.com> wrote:
>> > On Thu, 11 Sep 2014, Gregory Farnum wrote:
>> >> On Thu, Sep 11, 2014 at 4:13 AM, Kenneth Waegeman
>> >> <Kenneth.Waegeman at ugent.be> wrote:
>> >> > Hi all,
>> >> >
>> >> > I am testing the tiering functionality with cephfs. I used a replicated
>> >> > cache with an EC data pool, and a replicated metadata pool like this:
>> >> >
>> >> >
>> >> > ceph osd pool create cache 1024 1024
>> >> > ceph osd pool set cache size 2
>> >> > ceph osd pool set cache min_size 1
>> >> > ceph osd erasure-code-profile set profile11 k=8 m=3 ruleset-failure-domain=osd
>> >> > ceph osd pool create ecdata 128 128 erasure profile11
>> >> > ceph osd tier add ecdata cache
>> >> > ceph osd tier cache-mode cache writeback
>> >> > ceph osd tier set-overlay ecdata cache
>> >> > ceph osd pool set cache hit_set_type bloom
>> >> > ceph osd pool set cache hit_set_count 1
>> >> > ceph osd pool set cache hit_set_period 3600
>> >> > ceph osd pool set cache target_max_bytes $((280*1024*1024*1024))
>> >> > ceph osd pool create metadata 128 128
>> >> > ceph osd pool set metadata crush_ruleset 1 # SSD root in crushmap
>> >> > ceph fs new ceph_fs metadata cache      <-- wrong ?
>> >> >
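Side note: the tier relationship created by the commands above can be
double-checked from the OSD map. A minimal sketch (the exact fields printed
vary a bit between releases):

ceph osd dump | grep '^pool'
# the 'ecdata' pool line should list 'cache' as its read_tier/write_tier,
# and the 'cache' pool line should show 'tier_of' pointing back at 'ecdata'
# with cache_mode writeback
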
>> >> > I started testing with this, and this worked, I could write to it with
>> >> > cephfs and the cache was flushing to the ecdata pool as expected.
>> >> > But now I notice I made the fs right upon the cache, instead of the
>> >> > underlying data pool. I suppose I should have done this:
>> >> >
>> >> > ceph fs new ceph_fs metadata ecdata
>> >> >
>> >> > So my question is: Was this wrong and not doing the things I thought it did,
>> >> > or was this somehow handled by ceph and didn't it matter I specified the
>> >> > cache instead of the data pool?
>> >>
>> >> Well, it's sort of doing what you want it to. You've told the
>> >> filesystem to use the "cache" pool as the location for all of its
>> >> data. But RADOS is pushing everything in the "cache" pool down to the
>> >> "ecdata" pool.
>> >> So it'll work for now as you want. But if in future you wanted to stop
>> >> using the caching pool, or switch it out for a different pool
>> >> entirely, that wouldn't work (whereas it would if the fs was using
>> >> "ecdata").

After this I tried with the 'ecdata' pool, which does not work because it is
itself an EC pool.
So I guess specifying the cache pool is indeed the only way, and that's fine
as long as it works.
It is just a bit confusing to specify the cache pool rather than the data pool :)
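
For what it's worth, a minimal sketch of how one could verify that writes land
in 'cache' and are eventually flushed down to 'ecdata' (output formats differ
a bit per release):

ceph mds dump | grep -E 'data_pools|metadata_pool'   # which pools the fs is actually using
ceph df                                              # per-pool usage; 'ecdata' should grow as the cache tier flushes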

>> >>
>> >> We should perhaps look at preventing use of cache pools like this...hrm...
>> >> http://tracker.ceph.com/issues/9435
>> >
>> > Should we?  I was planning on doing exactly this for my home cluster.
>>
>> Not cache pools under CephFS, but specifying the cache pool as the
>> data pool (rather than some underlying pool). Or is there some reason
>> we might want the cache pool to be the one the filesystem is using for
>> indexing?
>
> Oh, right.  Yeah that's fine.  :)
>
> sage



----- End message from Sage Weil <sweil at redhat.com> -----

-- 

Kind regards,
Kenneth Waegeman


