Cache tiering and target_max_bytes

On 08/14/2014 10:30 PM, Sage Weil wrote:
> On Thu, 14 Aug 2014, Paweł Sadowski wrote:
>> On 14.08.2014 17:20, Sage Weil wrote:
>>> On Thu, 14 Aug 2014, Paweł Sadowski wrote:
>>>> Hello,
>>>>
>>>> I have a cluster of 35 OSDs (30 HDD, 5 SSD) with cache tiering configured.
>>>> During tests it looks like Ceph is not respecting the target_max_bytes
>>>> setting. Steps to reproduce:
>>>>  - configure cache tiering
>>>>  - set target_max_bytes to 32G (on hot pool)
>>>>  - write more than 32G of data
>>>>  - nothing happens
> <snip details>
>
> The reason the agent isn't doing any work is that you don't have 
> hit_set_* configured for the cache pool, which means the cluster isn't 
> tracking which objects get read to inform the flush/evict 
> decisions.  Configuring that will fix this.  Try
>
>  ceph osd pool set cache hit_set_type bloom
>  ceph osd pool set cache hit_set_count 8
>  ceph osd pool set cache hit_set_period 3600
>
> or similar.
>
> The agent could still run in a brain-dead mode without it, but it suffers 
> from the bug you found.  That was fixed after 0.80.5 and will be in 
> 0.80.6.
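[For context, a minimal end-to-end cache-tier configuration combining the hit_set settings above with the eviction target from the original report might look like the sketch below. The pool names `rbd` (backing) and `cache` (hot tier) and the ratio values are illustrative placeholders, not taken from the thread.]

```shell
# Attach the cache pool as a writeback tier in front of the backing pool.
ceph osd tier add rbd cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay rbd cache

# Enable hit-set tracking so the tiering agent knows which objects are
# hot; without this the agent has no basis for flush/evict decisions.
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 8
ceph osd pool set cache hit_set_period 3600

# Size limit the agent enforces: 32 GiB, as in the report above
# (32 * 1024^3 = 34359738368 bytes).
ceph osd pool set cache target_max_bytes 34359738368

# Illustrative thresholds, as fractions of target_max_bytes, at which
# the agent starts flushing dirty objects and evicting clean ones.
ceph osd pool set cache cache_target_dirty_ratio 0.4
ceph osd pool set cache cache_target_full_ratio 0.8
```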
Thanks!

PS

