How does cache tier work in writeback mode?

Hi list,
I am testing a cache tier in writeback mode.
The test result is confusing: the write performance is worse than without the cache tier.
 
The hot storage pool is an all-SSD pool and the cold storage pool is an all-HDD pool. For comparison, I also created an hdd-pool and an ssd-pool with the same CRUSH rules as the cache tier pools.
The pool config:

tier pool    OSD    cap.(TB)   pg      comparison pool   OSD    cap.(TB)   pg
hot-pool     20     4.8        1024    ssd-pool          20     4.8        1024
cold-pool    140    1400       2048    hdd-pool          140    1400       2048
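
(For reference, pools like these can be created with device-class CRUSH rules, e.g. as below. The rule names are only illustrative, not my exact commands:)

# ceph osd crush rule create-replicated ssd-rule default host ssd
# ceph osd crush rule create-replicated hdd-rule default host hdd
# ceph osd pool create hot-pool 1024 1024 replicated ssd-rule
# ceph osd pool create cold-pool 2048 2048 replicated hdd-rule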

 
The cache tier config:

# ceph osd tier add cold-pool hot-pool
pool 'hot-pool' is now (or already was) a tier of 'cold-pool'
# ceph osd tier cache-mode hot-pool writeback
set cache-mode for pool 'hot-pool' to writeback
# ceph osd tier set-overlay cold-pool hot-pool
overlay for 'cold-pool' is now (or already was) 'hot-pool'
# ceph osd pool set hot-pool hit_set_type bloom
set pool 39 hit_set_type to bloom
# ceph osd pool set hot-pool hit_set_count 10
set pool 39 hit_set_count to 10
# ceph osd pool set hot-pool hit_set_period 3600
set pool 39 hit_set_period to 3600
# ceph osd pool set hot-pool target_max_bytes 2400000000000
set pool 39 target_max_bytes to 2400000000000
# ceph osd pool set hot-pool target_max_objects 300000
set pool 39 target_max_objects to 300000
# ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
set pool 39 cache_target_dirty_ratio to 0.4
# ceph osd pool set hot-pool cache_target_dirty_high_ratio 0.6
set pool 39 cache_target_dirty_high_ratio to 0.6
# ceph osd pool set hot-pool cache_target_full_ratio 0.8
set pool 39 cache_target_full_ratio to 0.8
# ceph osd pool set hot-pool cache_min_flush_age 600
set pool 39 cache_min_flush_age to 600
# ceph osd pool set hot-pool cache_min_evict_age 1800
set pool 39 cache_min_evict_age to 1800
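
(The promotion-related recency settings are not touched above; for reference, their effective values can be checked with something like:)

# ceph osd pool get hot-pool min_read_recency_for_promote
# ceph osd pool get hot-pool min_write_recency_for_promote
# ceph osd pool ls detail | grep hot-pool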

 
 
Write Test

cold-pool (tier) write test for 10s:
# rados bench -p cold-pool 10 write --no-cleanup

hdd-pool write test for 10s:
# rados bench -p hdd-pool 10 write --no-cleanup

ssd-pool write test for 10s:
# rados bench -p ssd-pool 10 write --no-cleanup
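
(rados bench defaults to 4 MB objects and 16 concurrent writers, so the equivalent explicit form would be something like:)

# rados bench -p cold-pool 10 write -b 4194304 -t 16 --no-cleanup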

 

result:

                     tier     hdd     ssd
objects              695      737     2550
bandwidth (MB/s)     272      289     1016
avg latency (s)      0.23     0.22    0.06
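
(To see whether the writes actually land in the hot pool or get proxied/flushed to the cold pool, the per-pool client IO can be watched while the tier test runs. A rough sketch:)

# watch -n 1 'ceph osd pool stats hot-pool; ceph osd pool stats cold-pool'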

 

 

Read Test

# rados bench -p cold-pool 10 seq
# rados bench -p cold-pool 10 rand

# rados bench -p hdd-pool 10 seq
# rados bench -p hdd-pool 10 rand

# rados bench -p ssd-pool 10 seq
# rados bench -p ssd-pool 10 rand

 

seq result:

                     tier     hdd      ssd
bandwidth (MB/s)     806      789      1113
avg latency (s)      0.074    0.079    0.056

rand result:

                     tier     hdd      ssd
bandwidth (MB/s)     1106     790      1113
avg latency (s)      0.056    0.079    0.056
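
(Since the seq/rand reads on cold-pool go through the cache tier overlay, they may promote objects into the hot pool; comparing the hot-pool object count before and after the read tests shows whether that happens. A sketch:)

# rados -p hot-pool ls | wc -l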

 

 

 

My understanding is that a pool with a cache tier in writeback mode should perform like the all-SSD pool (the client gets an ack as soon as the data is written to the hot storage), as long as the cache doesn't need to be flushed.

But in the write test, the pool with the cache tier performs even worse than the all-HDD pool.

I also inspected the pool stats and found only 244 objects in the hot-pool but 695 objects in the cold-pool (the write test wrote 695 objects). With my settings, 695 objects shouldn't trigger any flush: rados bench writes 4 MB objects by default, so 695 objects is only about 2.8 GB, far below both dirty thresholds of 0.4 * 2.4 TB = 960 GB (cache_target_dirty_ratio * target_max_bytes) and 0.4 * 300000 = 120000 objects (cache_target_dirty_ratio * target_max_objects).
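
(For reference, the per-pool object and dirty counts can be rechecked with any of the following; this is just a sketch, equivalent commands work too:)

# ceph df detail
# rados df
# rados -p hot-pool ls | wc -l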

 

Is there any setting or concept that I have misunderstood?

 

 
 
 
2018-02-09

lin.yunfan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
