Stalling IO with cache tier

Hi list,

Our current Ceph production cluster is struggling with performance
issues, so we decided to add an all-flash cache tier (the cluster
currently runs on spinners with journals on separate SSDs).

We ordered Intel SSDs and disk trays, and read
http://docs.ceph.com/docs/hammer/rados/operations/cache-tiering/
carefully. Afterwards we created a new pool in a separate CRUSH root,
assigned a ruleset that matches only the flash-based OSDs.
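
For reference, the separate root and ruleset were set up roughly along
these lines; the bucket, host, rule and pool names and the PG counts
below are illustrative rather than a literal copy of our session:

  # new CRUSH root holding only the flash hosts/OSDs
  ceph osd crush add-bucket flash root
  ceph osd crush move ssd-host1 root=flash
  # simple replicated rule that selects OSDs under that root only
  ceph osd crush rule create-simple flash_ruleset flash host
  # the pool that will act as the cache tier, pinned to that rule
  ceph osd pool create cache 128 128 replicated
  ceph osd pool set cache crush_ruleset 1   # id as shown by 'ceph osd crush rule dump'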

Since adding and removing a cache tier can be done transparently, we
decided to get going right away in order to save time and improve
performance as soon as possible:

> # ceph osd tier add cinder-volumes cache
> pool 'cache' is now (or already was) a tier of 'cinder-volumes'
> # ceph osd tier cache-mode cache writeback
> set cache-mode for pool 'cache' to writeback
> # ceph osd tier set-overlay cinder-volumes cache
> overlay for 'cinder-volumes' is now (or already was) 'cache'
> # ceph osd pool set cache hit_set_type bloom
> set pool 6 hit_set_type to bloom
> # ceph osd pool set cache hit_set_count 1
> set pool 6 hit_set_count to 1
> # ceph osd pool set cache hit_set_period 3600
> set pool 6 hit_set_period to 3600
> # ceph osd pool set cache target_max_bytes 257698037760
> set pool 6 target_max_bytes to 257698037760
> # ceph osd pool set cache cache_target_full_ratio 0.8
> set pool 6 cache_target_full_ratio to 0.8
Yes, full flash cache here we go! Or is it?
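
(We left the remaining cache parameters at their defaults. For
completeness, setting them explicitly would look something like the
commands below; the values are placeholders, not numbers we have
actually tested:)

  ceph osd pool set cache target_max_objects 1000000
  ceph osd pool set cache cache_target_dirty_ratio 0.4
  ceph osd pool set cache cache_min_flush_age 600
  ceph osd pool set cache cache_min_evict_age 1800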

After a few minutes all hell broke loose: all IO on our cluster
appeared to stall, and no objects were to be found in the new cache
pool called 'cache'.
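
(We looked for objects with something along these lines; the exact
invocations are from memory:)

  rados -p cache ls | head   # lists nothing for the cache pool
  ceph df                    # per-pool object/byte counts, 'cache' stays at zero
  ceph -s                    # overall cluster state while IO was stalled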

Luckily we were able to remove the cache tier again within a few
moments, restoring storage services.
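
From memory, the removal was roughly the documented procedure for a
writeback cache:

  ceph osd tier cache-mode cache forward
  rados -p cache cache-flush-evict-all
  ceph osd tier remove-overlay cinder-volumes
  ceph osd tier remove cinder-volumes cache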

The storage cluster backs both the Cinder and Glance services of our
OpenStack deployment.

Could someone please give us some pointers on how to debug this? The
log files seem a little "voidy" on the matter, I'm afraid.
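
So far we have mainly looked at the standard OSD and monitor logs. For
a next attempt we could bump verbosity with something like the
commands below, if that is the right direction:

  ceph tell osd.* injectargs '--debug_osd 10 --debug_ms 1'
  ceph health detail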

Thanks in advance! It would be great if we could reintroduce the cache
tier in the near future and get that performance improvement after all.

Cheers,
Kees



