Re: CephFS + cache tiering in Jewel

On Wed, Aug 24, 2016 at 11:21 PM, Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi,
>
>
> On 08/24/2016 10:22 PM, Gregory Farnum wrote:
>>
>> On Tue, Aug 23, 2016 at 7:50 AM, Burkhard Linke
>> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> the Firefly and Hammer releases did not support transparent usage of
>>> cache
>>> tiering in CephFS. The cache tier itself had to be specified as data
>>> pool,
>>> thus preventing on-the-fly addition and removal of cache tiers.
>>>
>>> Does the same restriction also apply to Jewel? I would like to add a
>>> cache
>>> tier to an existing data pool.
>>
>> This got cleaned up a lot but is still a bit weird since you *can't*
>> use a bare EC pool on Ceph. I think right now you'll find that you can
>> add an EC pool to the CephFS data pools if it has a cache pool, but
>> doing so will prevent removing the cache pool.
>
> EC pools have been a problem in Firefly and Hammer, too. We removed them
> from our CephFS setup in the wake of the cache tiering error in Hammer.
>
> Does cache tiering work as expected with replicated pools? We use kernel
> based CephFS clients running kernel 4.6.6 on almost all machines.

I think so? I'm not entirely sure what you mean, and I don't work
with cache tier pools any more.
-Greg
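[For reference, the usual sequence for attaching a writeback cache tier to an
existing replicated pool in Jewel is sketched below. Pool names cephfs_data
and cachepool are placeholders, and the tuning values are illustrative only;
the hit_set and sizing parameters must be set before the tier will accept I/O.]

```shell
# Attach cachepool as a cache tier in front of the existing data pool
ceph osd tier add cephfs_data cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay cephfs_data cachepool

# Required hit-set configuration (the tier will not function without it)
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool hit_set_count 1
ceph osd pool set cachepool hit_set_period 3600

# Example sizing target so the agent knows when to flush/evict
ceph osd pool set cachepool target_max_bytes 1099511627776
```

Removing the tier later requires switching the cache-mode to forward,
flushing it with `rados -p cachepool cache-flush-evict-all`, then
`ceph osd tier remove-overlay` and `ceph osd tier remove` — which, per the
above, is blocked while an EC pool depends on the cache.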
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


