Cephfs upon Tiering

On Mon, Sep 15, 2014 at 6:32 AM, Berant Lemmenes <berant at lemmenes.com> wrote:
> Greg,
>
> So is the consensus that the appropriate way to implement this scenario is
> to have the fs created on the EC backing pool vs. the cache pool but that
> the UI check needs to be tweaked to distinguish between this scenario and
> just trying to use an EC pool alone?

Yeah, we'll fix this for Giant. In practical terms it doesn't make
much difference right now; just want to be consistent for the future.
:)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
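
For reference, a minimal sketch of the layout this thread converges on, using the pool names from Kenneth's example quoted below (cache, ecdata, metadata, ceph_fs); the last step creates the filesystem against the EC backing pool rather than against the cache pool, which is the behaviour described above as the target for Giant:

# put the replicated "cache" pool in front of the EC "ecdata" pool as a writeback overlay
ceph osd tier add ecdata cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay ecdata cache
# create the filesystem on the EC backing pool, not on the cache pool
ceph fs new ceph_fs metadata ecdata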

>
> I'm also interested in the scenario of having an EC-backed pool fronted by a
> replicated cache for use with cephfs.
>
> Thanks,
> Berant
>
> On Fri, Sep 12, 2014 at 12:37 PM, Gregory Farnum <greg at inktank.com> wrote:
>>
>> On Fri, Sep 12, 2014 at 1:53 AM, Kenneth Waegeman
>> <Kenneth.Waegeman at ugent.be> wrote:
>> >
>> > ----- Message from Sage Weil <sweil at redhat.com> ---------
>> >    Date: Thu, 11 Sep 2014 14:10:46 -0700 (PDT)
>> >    From: Sage Weil <sweil at redhat.com>
>> > Subject: Re: Cephfs upon Tiering
>> >      To: Gregory Farnum <greg at inktank.com>
>> >      Cc: Kenneth Waegeman <Kenneth.Waegeman at ugent.be>, ceph-users
>> > <ceph-users at lists.ceph.com>
>> >
>> >
>> >
>> >> On Thu, 11 Sep 2014, Gregory Farnum wrote:
>> >>>
>> >>> On Thu, Sep 11, 2014 at 11:39 AM, Sage Weil <sweil at redhat.com> wrote:
>> >>> > On Thu, 11 Sep 2014, Gregory Farnum wrote:
>> >>> >> On Thu, Sep 11, 2014 at 4:13 AM, Kenneth Waegeman
>> >>> >> <Kenneth.Waegeman at ugent.be> wrote:
>> >>> >> > Hi all,
>> >>> >> >
>> >>> >> > I am testing the tiering functionality with cephfs. I used a
>> >>> >> > replicated cache with an EC data pool, and a replicated metadata
>> >>> >> > pool like this:
>> >>> >> >
>> >>> >> >
>> >>> >> > ceph osd pool create cache 1024 1024
>> >>> >> > ceph osd pool set cache size 2
>> >>> >> > ceph osd pool set cache min_size 1
>> >>> >> > ceph osd erasure-code-profile set profile11 k=8 m=3 ruleset-failure-domain=osd
>> >>> >> > ceph osd pool create ecdata 128 128 erasure profile11
>> >>> >> > ceph osd tier add ecdata cache
>> >>> >> > ceph osd tier cache-mode cache writeback
>> >>> >> > ceph osd tier set-overlay ecdata cache
>> >>> >> > ceph osd pool set cache hit_set_type bloom
>> >>> >> > ceph osd pool set cache hit_set_count 1
>> >>> >> > ceph osd pool set cache hit_set_period 3600
>> >>> >> > ceph osd pool set cache target_max_bytes $((280*1024*1024*1024))
>> >>> >> > ceph osd pool create metadata 128 128
>> >>> >> > ceph osd pool set metadata crush_ruleset 1 # SSD root in crushmap
>> >>> >> > ceph fs new ceph_fs metadata cache      <-- wrong ?
>> >>> >> >
>> >>> >> > I started testing with this, and it worked: I could write to it with
>> >>> >> > cephfs, and the cache was flushing to the ecdata pool as expected.
>> >>> >> > But now I notice I created the fs right on top of the cache, instead
>> >>> >> > of the underlying data pool. I suppose I should have done this:
>> >>> >> >
>> >>> >> > ceph fs new ceph_fs metadata ecdata
>> >>> >> >
>> >>> >> > So my question is: was this wrong and not doing what I thought it
>> >>> >> > did, or was this somehow handled by ceph, so that it didn't matter
>> >>> >> > that I specified the cache pool instead of the data pool?
>> >>> >>
>> >>> >> Well, it's sort of doing what you want it to. You've told the
>> >>> >> filesystem to use the "cache" pool as the location for all of its
>> >>> >> data. But RADOS is pushing everything in the "cache" pool down to the
>> >>> >> "ecdata" pool.
>> >>> >> So it'll work for now as you want. But if in future you wanted to stop
>> >>> >> using the caching pool, or switch it out for a different pool
>> >>> >> entirely, that wouldn't work (whereas it would if the fs was using
>> >>> >> "ecdata").
>> >
>> >
>> > After this I tried with the 'ecdata' pool, which does not work because it
>> > is itself an EC pool.
>> > So I guess specifying the cache pool is indeed the only way, but that's
>> > fine if it works.
>> > It is just a bit confusing to specify the cache pool rather than the data
>> > pool. :)
>>
>> *blinks*
>> Uh, yeah. I forgot about that check, which was added because somebody
>> tried to use CephFS on an EC pool without a cache on top. We've obviously
>> got some UI work to do. Thanks for the reminder!
>> -Greg
>>
>>
>> --
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>
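
A quick way to check the behaviour Greg describes above (objects written through the cache overlay being flushed down into the backing EC pool) is to watch per-pool usage and activity. These are standard Ceph CLI commands; the pool names are the ones from Kenneth's example:

# per-pool object and space usage; ecdata should grow as the cache tier flushes
ceph df detail
# flush/evict activity on the cache tier
ceph osd pool stats cache
# list a few objects that have already been written back to the EC pool
rados -p ecdata ls | head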

