Re: RBD poor performance

Yes (I mean yes, it's real). Ceph's cache tiering works by promoting whole (4 MB) objects to the cache pool, updating them there (with your 4K random writes, presumably), and evicting them back when the cache is full. That is, the bad part here is that it can't do "write-through".
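The arithmetic works out, too: 150 random 4K writes per second, each of which can promote or flush a whole 4 MB object, means up to 150 × 4 MiB ≈ 600 MiB/s of tier traffic, which is in the same ballpark as the 500 MiB/s of flushing and 280 MiB/s of evicting you're seeing.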

There are also some configuration options governing the flush/eviction process that you can try to tune. But don't expect the fundamental behaviour to change: when the cache pool is full, Ceph still has to evict something from it.
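For reference, these are the kind of knobs I mean (here "hot-cache" is a placeholder for your cache pool name, and the values are only examples of the syntax, not recommendations):

    # flush dirty objects once this fraction of the cache is dirty
    ceph osd pool set hot-cache cache_target_dirty_ratio 0.3
    # flush more aggressively above this dirty fraction
    ceph osd pool set hot-cache cache_target_dirty_high_ratio 0.5
    # start evicting clean objects at this fill level
    ceph osd pool set hot-cache cache_target_full_ratio 0.7
    # don't flush/evict objects younger than this (seconds)
    ceph osd pool set hot-cache cache_min_flush_age 600
    ceph osd pool set hot-cache cache_min_evict_age 1800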

Why do you want cache tiering at all? If you're using EC, just set `allow_ec_overwrites=true` on the data pool (this requires BlueStore OSDs) and put your RBD/CephFS on it directly, without a cache pool.
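In case it's useful, the setup looks roughly like this ("ecpool" and "rbdmeta" are placeholder pool names). An EC pool can't hold RBD image metadata (omap), so the metadata goes into a small replicated pool and only the data is directed to the EC pool:

    # allow partial overwrites on the EC pool (BlueStore only)
    ceph osd pool set ecpool allow_ec_overwrites true
    # image metadata lives in the replicated pool, data in the EC pool
    rbd create --size 100G --data-pool ecpool rbdmeta/myimage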

> During one of my tests I found that fio inside my VM generates 1 MiB/s
> (about 150 IOPS), but `ceph -s` shows 500 MiB/s of flushing and 280
> MiB/s of evicting data. How can that be? Is it real? Does Ceph have any
> optimization policies to eliminate such behaviour?

--
With best regards,
  Vitaliy Filippov


