Re: Cache tiering and cephfs

I believe the reason we don't allow you to do this right now is that
there was not a good way of coordinating the transition (so that
everybody starts routing traffic through the cache pool at the same
time), which could lead to data inconsistencies. Looks like the OSDs
handle this appropriately now, though, so I'll create a bug for
backport to Giant. Until that happens I think you'll need to associate
the cache and base pools before giving them to the MDS; sorry.
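If it helps, here is a rough sketch of that ordering, with hypothetical
pool names (fs-data, fs-data-cache, fs-metadata) and placeholder PG
counts and pool ids; check "ceph osd dump" for the real ids on your
cluster:

ceph osd pool create fs-metadata 128 128
ceph osd pool create fs-data 128 128
ceph osd pool create fs-data-cache 128 128
ceph osd tier add fs-data fs-data-cache
ceph osd tier cache-mode fs-data-cache writeback
ceph osd tier set-overlay fs-data fs-data-cache
# only hand the pools to the MDS once the tier and overlay are in place
ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it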
-Greg

On Mon, Nov 17, 2014 at 1:07 PM, Scott Laird <scott@xxxxxxxxxxx> wrote:
> Hmm.  I'd rather not recreate my cephfs filesystem from scratch if I don't
> have to.  Has anyone managed to add a cache tier to a running cephfs
> filesystem?
>
>
> On Sun Nov 16 2014 at 1:39:47 PM Erik Logtenberg <erik@xxxxxxxxxxxxx> wrote:
>>
>> I know that it is possible to run CephFS with a cache tier on the data
>> pool in Giant, because that's what I do. However, when I configured it, I
>> was on the previous release. When I upgraded to Giant, everything just
>> kept working.
>>
>> By the way, when I set it up, I used the following commands:
>>
>> ceph osd pool create cephfs-data 192 192 erasure
>> ceph osd pool create cephfs-metadata 192 192 replicated ssd
>> ceph osd pool create cephfs-data-cache 192 192 replicated ssd
>> ceph osd pool set cephfs-data-cache crush_ruleset 1
>> ceph osd pool set cephfs-metadata crush_ruleset 1
>> ceph osd tier add cephfs-data cephfs-data-cache
>> ceph osd tier cache-mode cephfs-data-cache writeback
>> ceph osd tier set-overlay cephfs-data cephfs-data-cache
>> ceph osd dump
>> ceph mds newfs 5 6 --yes-i-really-mean-it
>>
>> So I didn't actually add a cache tier to an existing CephFS; I created
>> the pools first and set up CephFS immediately after. In my case, the "ssd"
>> pool is SSD-backed (obviously), while the default pool is on rotating
>> media; crush_ruleset 1 is meant to place both the cache pool and the
>> metadata pool on the SSDs.
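>>
>> For reference, a rule like that could be created with something along
>> these lines (just a sketch: it assumes a CRUSH root named "ssd" that
>> contains the SSD OSDs, and the rule id it gets may not be 1, so check
>> the dump output before setting crush_ruleset):
>>
>> ceph osd crush rule create-simple ssd-rule ssd host
>> ceph osd crush rule dump ssd-rule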
>>
>> Erik.
>>
>>
>> On 11/16/2014 08:01 PM, Scott Laird wrote:
>> > Is it possible to add a cache tier to cephfs's data pool in giant?
>> >
>> > I'm getting an error:
>> >
>> > $ ceph osd tier set-overlay data data-cache
>> >
>> > Error EBUSY: pool 'data' is in use by CephFS via its tier
>> >
>> >
>> > From what I can see in the code, that comes from
>> > OSDMonitor::_check_remove_tier; I don't understand why set-overlay needs
>> > to call _check_remove_tier.  A quick look suggests that set-overlay
>> > will always fail once the MDS has been set up.  Is this a bug, or am I doing
>> > something wrong?
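>> >
>> > For context, the pools CephFS has claimed show up in the MDS map, which
>> > is presumably why the monitor treats 'data' as in use (a quick sanity
>> > check; output fields may vary by release):
>> >
>> > ceph mds dump | grep pool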
>> >
>> >
>> > Scott
>> >
>> >
>> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



