Re: Troubleshooting an erasure coded pool with a cache tier

When acting as a cache pool, it needs to do a lookup on the base pool for every object it hasn't encountered before. I assume that's why it's slower.
(The penalty should not be nearly as high as you're seeing here, but based on the low numbers I imagine you're running everything on an overloaded laptop or something.)
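(If you want to confirm that it's the lookups/promotions, watching something like "ceph osd pool stats ec4p1c" while the benchmark runs should show the cache tier's promote and flush activity; the exact fields depend on your release.)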
-Greg
On Sat, Nov 8, 2014 at 11:14 AM Loic Dachary <loic@xxxxxxxxxxx> wrote:
Hi,

This is a first attempt; it is entirely possible that the solution is simple or that I just need to RTFM ;-)

Here is the problem observed:

rados --pool ec4p1 bench 120 write # the erasure coded pool
Total time run:         147.207804
Total writes made:      458
Write size:             4194304
Bandwidth (MB/sec):     12.445

rados --pool disks bench 120 write # same crush ruleset as the cache tier
Total time run:         126.312601
Total writes made:      1092
Write size:             4194304
Bandwidth (MB/sec):     34.581
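(For reference, the bandwidth figure is simply total data over total time: 458 writes * 4 MB / 147.2 s ≈ 12.4 MB/s on the erasure coded pool versus 1092 * 4 MB / 126.3 s ≈ 34.6 MB/s on the replicated pool.)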

There must be something wrong in how the cache tier is set up: one would expect roughly the same write speed, since the total size written (a few GB) is smaller than the size of the cache pool. Instead, writes through the cache tier are consistently less than half as fast (12.445 * 2 < 34.581).

root@g1:~# ceph osd dump | grep disks
pool 58 'disks' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 15110 lfor 12228 flags hashpspool stripe_width 0
root@g1:~# ceph osd dump | grep ec4
pool 74 'ec4p1' erasure size 5 min_size 4 crush_ruleset 2 object_hash rjenkins pg_num 32 pgp_num 32 last_change 15604 lfor 15604 flags hashpspool tiers 75 read_tier 75 write_tier 75 stripe_width 4096
pool 75 'ec4p1c' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 12 pgp_num 12 last_change 15613 flags hashpspool,incomplete_clones tier_of 74 cache_mode writeback target_bytes 1000000000 target_objects 1000000000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 3600s x1 stripe_width 0
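For reference, the tiering shown in the dump corresponds to commands along these lines (a sketch reconstructed from the dump above, not the exact command history):

ceph osd tier add ec4p1 ec4p1c
ceph osd tier cache-mode ec4p1c writeback
ceph osd tier set-overlay ec4p1 ec4p1c
ceph osd pool set ec4p1c hit_set_type bloom
ceph osd pool set ec4p1c hit_set_count 1
ceph osd pool set ec4p1c hit_set_period 3600
ceph osd pool set ec4p1c target_max_bytes 1000000000
ceph osd pool set ec4p1c target_max_objects 1000000000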

root@g1:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    26955G     18850G        6735G         24.99
POOLS:
    NAME            ID     USED       %USED     MAX AVAIL     OBJECTS
..
    disks           58      1823G      6.76         5305G      471080
..
    ec4p1           74       589G      2.19        12732G      153623
    ec4p1c          75     57501k         0         5305G         491
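(As a sanity check on the df output: 5305G * 3 replicas ≈ 15915G raw, and 12732G * 5/4 for what looks like a k=4,m=1 erasure code ≈ 15915G raw as well, so both MAX AVAIL figures point at the same free raw space.)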


Cheers
--
Loïc Dachary, Artisan Logiciel Libre

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
