Re: Cache tier unexpected behavior: promote on lock

Thanks for the answers!
Since it leads to a decrease in caching efficiency, I've opened an issue:
http://tracker.ceph.com/issues/22528 

15.12.2017, 23:03, "Gregory Farnum" <gfarnum@xxxxxxxxxx>:
> On Thu, Dec 14, 2017 at 9:11 AM, Захаров Алексей <zakharov.a.g@xxxxxxxxx> wrote:
>>  Hi, Gregory,
>>  Thank you for your answer!
>>
>>  Is there a way to avoid promotion on "locking" when not using EC pools?
>>  Is it possible to make this configurable?
>>
>>  We don't use EC pools, so for us this mechanism is pure overhead. It only
>>  adds more load on both pools and the network.
>
> Unfortunately I don't think there's an easy way to avoid it right now.
> The caching is generally not set up well for
> handling these kinds of things, but it's possible the logic to proxy
> class operations onto replicated pools might not be *too*
> objectionable....
> -Greg
>
>>  14.12.2017, 01:16, "Gregory Farnum" <gfarnum@xxxxxxxxxx>:
>>
>>  Voluntary “locking” in RADOS is an “object class” operation. These are not
>>  part of the core API and cannot run on EC pools, so any operation using them
>>  will cause an immediate promotion.
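
For anyone hitting the same thing from library code, a minimal python-rados
sketch of the difference (the pool, object and cookie names are placeholders,
not our real libradosstriper client): the plain read below is governed by the
recency settings, while the advisory lock is an object-class (cls_lock) call
and promotes the object immediately.

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('poolname')          # base (storage) pool

    # Plain read: promotion is subject to min_read_recency_for_promote
    data = ioctx.read('objectname')

    # Advisory lock: an object-class (cls_lock) operation,
    # so the cache tier promotes the object right away
    ioctx.lock_exclusive('objectname', 'striper.lock', 'test-cookie',
                         desc='promotion test')
    ioctx.unlock('objectname', 'striper.lock', 'test-cookie')

    ioctx.close()
    cluster.shutdown()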
>>  On Wed, Dec 13, 2017 at 4:02 AM Захаров Алексей <zakharov.a.g@xxxxxxxxx>
>>  wrote:
>>
>>  Hello,
>>
>>  I've found that when a client gets a lock on an object, Ceph ignores any
>>  promotion settings and promotes the object immediately.
>>
>>  Is it a bug or a feature?
>>  Is it configurable?
>>
>>  Any help would be appreciated!
>>
>>  Ceph versions: 10.2.10 and 12.2.2
>>  We use libradosstriper-based clients.
>>
>>  Cache pool settings:
>>  size: 3
>>  min_size: 2
>>  crash_replay_interval: 0
>>  pg_num: 2048
>>  pgp_num: 2048
>>  crush_ruleset: 0
>>  hashpspool: true
>>  nodelete: false
>>  nopgchange: false
>>  nosizechange: false
>>  write_fadvise_dontneed: false
>>  noscrub: true
>>  nodeep-scrub: false
>>  hit_set_type: bloom
>>  hit_set_period: 60
>>  hit_set_count: 30
>>  hit_set_fpp: 0.05
>>  use_gmt_hitset: 1
>>  auid: 0
>>  target_max_objects: 0
>>  target_max_bytes: 18819770744832
>>  cache_target_dirty_ratio: 0.4
>>  cache_target_dirty_high_ratio: 0.6
>>  cache_target_full_ratio: 0.8
>>  cache_min_flush_age: 60
>>  cache_min_evict_age: 180
>>  min_read_recency_for_promote: 15
>>  min_write_recency_for_promote: 15
>>  fast_read: 0
>>  hit_set_grade_decay_rate: 50
>>  hit_set_search_last_n: 30
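
The promotion-related values above can be applied with plain
`ceph osd pool set` commands; a rough python-rados equivalent via
mon_command, if anyone wants to script it (the pool name 'cachepool'
is a placeholder):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def pool_set(pool, var, val):
        # Equivalent to: ceph osd pool set <pool> <var> <val>
        cmd = json.dumps({'prefix': 'osd pool set',
                          'pool': pool, 'var': var, 'val': str(val)})
        ret, out, errs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(errs)

    # Promotion-related knobs from the listing above
    pool_set('cachepool', 'hit_set_period', 60)
    pool_set('cachepool', 'hit_set_count', 30)
    pool_set('cachepool', 'min_read_recency_for_promote', 15)
    pool_set('cachepool', 'min_write_recency_for_promote', 15)

    cluster.shutdown()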
>>
>>  To get a lock via the CLI (to test the behavior) we use:
>>  # rados -p poolname lock get --lock-tag weird_ceph_locks \
>>      --lock-cookie `uuid` objectname striper.lock
>>  Right after that, the object can be found in the caching pool.
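
From library code the promotion can be confirmed the same way as with
`rados -p cachepool ls`; a small python-rados sketch that lists the cache
pool directly (pool and object names are placeholders, and the listing may
also contain internal hit-set objects):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # List the cache tier pool directly to see whether the object was promoted
    cache = cluster.open_ioctx('cachepool')
    promoted = any(obj.key == 'objectname' for obj in cache.list_objects())
    print('promoted:', promoted)

    cache.close()
    cluster.shutdown()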
>>
>>  --
>>  Regards,
>>  Aleksei Zakharov
>>  _______________________________________________
>>  ceph-users mailing list
>>  ceph-users@xxxxxxxxxxxxxx
>>  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>  --
>>  Regards,
>>  Aleksei Zakharov

-- 
Regards,
Aleksei Zakharov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



