Cephfs upon Tiering

On Fri, Sep 12, 2014 at 1:53 AM, Kenneth Waegeman <Kenneth.Waegeman at ugent.be> wrote:
>
> ----- Message from Sage Weil <sweil at redhat.com> ---------
>    Date: Thu, 11 Sep 2014 14:10:46 -0700 (PDT)
>    From: Sage Weil <sweil at redhat.com>
> Subject: Re: Cephfs upon Tiering
>      To: Gregory Farnum <greg at inktank.com>
>      Cc: Kenneth Waegeman <Kenneth.Waegeman at ugent.be>, ceph-users <ceph-users at lists.ceph.com>
>
>
>
>> On Thu, 11 Sep 2014, Gregory Farnum wrote:
>>>
>>> > On Thu, Sep 11, 2014 at 11:39 AM, Sage Weil <sweil at redhat.com> wrote:
>>> > On Thu, 11 Sep 2014, Gregory Farnum wrote:
>>> >> On Thu, Sep 11, 2014 at 4:13 AM, Kenneth Waegeman
>>> >> <Kenneth.Waegeman at ugent.be> wrote:
>>> >> > Hi all,
>>> >> >
>>> >> > I am testing the tiering functionality with cephfs. I used a
>>> >> > replicated
>>> >> > cache with an EC data pool, and a replicated metadata pool like
>>> >> > this:
>>> >> >
>>> >> >
>>> >> > ceph osd pool create cache 1024 1024
>>> >> > ceph osd pool set cache size 2
>>> >> > ceph osd pool set cache min_size 1
>>> >> > ceph osd erasure-code-profile set profile11 k=8 m=3
>>> >> > ruleset-failure-domain=osd
>>> >> > ceph osd pool create ecdata 128 128 erasure profile11
>>> >> > ceph osd tier add ecdata cache
>>> >> > ceph osd tier cache-mode cache writeback
>>> >> > ceph osd tier set-overlay ecdata cache
>>> >> > ceph osd pool set cache hit_set_type bloom
>>> >> > ceph osd pool set cache hit_set_count 1
>>> >> > ceph osd pool set cache hit_set_period 3600
>>> >> > ceph osd pool set cache target_max_bytes $((280*1024*1024*1024))
>>> >> > ceph osd pool create metadata 128 128
>>> >> > ceph osd pool set metadata crush_ruleset 1 # SSD root in crushmap
>>> >> > ceph fs new ceph_fs metadata cache      <-- wrong ?
>>> >> >
>>> >> > I started testing with this, and this worked, I could write to it with
>>> >> > cephfs and the cache was flushing to the ecdata pool as expected.
>>> >> > But now I notice I made the fs right upon the cache, instead of the
>>> >> > underlying data pool. I suppose I should have done this:
>>> >> >
>>> >> > ceph fs new ceph_fs metadata ecdata
>>> >> >
>>> >> > So my question is: Was this wrong and not doing the things I thought
>>> >> > it did, or was this somehow handled by ceph and didn't it matter I
>>> >> > specified the cache instead of the data pool?
>>> >>
>>> >> Well, it's sort of doing what you want it to. You've told the
>>> >> filesystem to use the "cache" pool as the location for all of its
>>> >> data. But RADOS is pushing everything in the "cache" pool down to the
>>> >> "ecdata" pool.
>>> >> So it'll work for now as you want. But if in future you wanted to stop
>>> >> using the caching pool, or switch it out for a different pool
>>> >> entirely, that wouldn't work (whereas it would if the fs was using
>>> >> "ecdata").
>
>
> After this I tried with the 'ecdata' pool, which is not working because it
> is itself an EC pool.
> So I guess specifying the cache pool is then indeed the only way, but that's
> ok then if that works.
> It is just a bit confusing to specify the cache pool rather than the data :)
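
In concrete terms, the two variants discussed above look like this (a sketch
reusing the pool names from Kenneth's listing; the exact rejection text for
the EC pool depends on the Ceph release, and "ceph fs ls" may not exist on
very old releases):

ceph fs new ceph_fs metadata ecdata   # rejected: ecdata is an erasure-coded pool
ceph fs new ceph_fs metadata cache    # accepted: cache is the replicated tier in front of ecdata
ceph fs ls                            # shows which metadata/data pools the filesystem was created with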

*blinks*
Uh, yeah. I forgot about that check, which was added because somebody tried
to use CephFS on an EC pool without a cache on top. We've obviously got
some UI work to do. Thanks for the reminder!
-Greg


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
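
For reference, a few read-only checks that the writeback tier really is
flushing into the EC base pool (a sketch, assuming the pool names used in
this thread):

ceph df                                  # per-pool object and byte counts for cache and ecdata
ceph osd dump | grep -E 'cache|ecdata'   # pool lines show the tier_of / tiers relationship and cache_mode
rados -p ecdata ls | head                # objects that have already been flushed down to ecdata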