moving rgw pools to ssd cache

Hello!

I am looking for a way to set up an RGW SSD cache tier in front of HDD-backed pools.

https://blog-fromsomedude.rhcloud.com/2015/11/06/Ceph-RadosGW-Placement-Targets/

I successfully created RGW pools on SSD as described there, and the placement targets are written to the user's profile, so data can be written to either the HDD or the SSD pool. The relevant excerpts from the zonegroup, zone, and user configuration are below (a sketch of how such entries can be applied follows the excerpts).
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": ["default-placement"]
        },
        {
            "name": "fast-placement",
           "tags": ["fast-placement"]
        }
    ],
    "default_placement": "default-placement",


    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "default.rgw.buckets.index",
                "data_pool": "default.rgw.buckets.data",
                "data_extra_pool": "default.rgw.buckets.non-ec",
                "index_type": 0
            }
        },
        {
            "key": "fast-placement",
            "val": {
                "index_pool": "default.rgw.fast.buckets.index",
                "data_pool": "default.rgw.fast.buckets.data",
                "data_extra_pool": "default.rgw.fast.buckets.non-ec",
                "index_type": 0
            }
        }
    ],

        "keys": [
            {
                "user": "test2",
        "default_placement": "fast-placement",
        "placement_tags": ["default-placement", "fast-placement"],


NAME                                ID     USED       %USED     MAX AVAIL     OBJECTS
default.rgw.fast.buckets.data       16      2906M      0.13         2216G        4339
default.rgw.buckets.data            18     20451M      0.12        16689G        6944
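
For context, a sketch of how the fast pools can be pinned to SSD-only OSDs, assuming a CRUSH root named "ssd" and a rule named "ssd-rule" (both example names):

ceph osd crush rule create-simple ssd-rule ssd host
ceph osd crush rule dump ssd-rule    # note the "ruleset" id
ceph osd pool set default.rgw.fast.buckets.data crush_ruleset <ruleset-id>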


Then I create the cache tier:
ceph osd tier add default.rgw.buckets.data default.rgw.fast.buckets.data --force-nonempty
ceph osd tier cache-mode default.rgw.fast.buckets.data writeback
ceph osd tier set-overlay default.rgw.buckets.data default.rgw.fast.buckets.data

ceph osd pool set default.rgw.fast.buckets.data hit_set_type bloom
ceph osd pool set default.rgw.fast.buckets.data hit_set_count 1
ceph osd pool set default.rgw.fast.buckets.data hit_set_period 300
ceph osd pool set default.rgw.fast.buckets.data target_max_bytes 2200000000000
ceph osd pool set default.rgw.fast.buckets.data cache_min_flush_age 300
ceph osd pool set default.rgw.fast.buckets.data cache_min_evict_age 300
ceph osd pool set default.rgw.fast.buckets.data cache_target_dirty_ratio 0.01
ceph osd pool set default.rgw.fast.buckets.data cache_target_full_ratio 0.02
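
For completeness, the values that actually took effect on the cache pool can be double-checked with:

ceph osd pool get default.rgw.fast.buckets.data all
ceph osd dump | grep default.rgw.fast.buckets.data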


I put in some data; it went into the SSD cache and down to the HDD pool.
The cluster has no active clients that could keep the data warm.
But after 300 seconds there is no flushing or evicting. Only the direct command works:

rados -p default.rgw.fast.buckets.data cache-flush-evict-all
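
For reference, whether anything moves on its own can be watched via the per-pool counters:

ceph df detail
rados df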

How can I fix this?
(Jewel, version 10.2.5)

--
Petr Malkov


