Write-back mode cache-tier behavior

Hi all,

We’re using a cache tier in write-back mode, but the write throughput is lower than we expected. We created a 20GB file on CephFS and ran iostat to collect disk statistics while the data was being written. iostat showed the SSDs (cache tier) sitting idle most of the time while the HDDs (storage tier) stayed busy the whole time.
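A minimal sketch of the kind of write test we mean, in case anyone wants to reproduce it while watching iostat on the OSD nodes (the mount point and chunk size below are placeholders, not our exact setup):

import os

# Write a 20 GiB file to a CephFS mount in 4 MiB chunks
# (4 MiB matches the default RADOS object size).
path = "/mnt/cephfs/testfile"        # placeholder mount point
chunk = b"\0" * (4 * 1024 * 1024)
total = 20 * 1024 ** 3

with open(path, "wb") as f:
    written = 0
    while written < total:
        f.write(chunk)
        written += len(chunk)
    f.flush()
    os.fsync(f.fileno())             # make sure the data reaches RADOS

The documentation on writeback mode says: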

When admins configure tiers with writeback mode, Ceph clients write data to the cache tier and receive an ACK from the cache tier. In time, the data written to the cache tier migrates to the storage tier and gets flushed from the cache tier.

So data should be written to the cache tier and only flushed to the storage tier once the dirty ratio exceeds the pool’s cache_target_dirty_ratio (0.4 in our case)? The phrase “in time” in the document confused us.
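For reference, these are the flush thresholds we would check on the cache pool; the pool name below is a placeholder, but the options are standard Ceph pool settings:

import subprocess

# Query the cache pool's flush thresholds via the ceph CLI.
# "cachepool" is a placeholder; substitute your cache pool's name.
pool = "cachepool"
for opt in ("cache_target_dirty_ratio",
            "cache_target_dirty_high_ratio",
            "cache_target_full_ratio",
            "cache_min_flush_age"):
    out = subprocess.check_output(["ceph", "osd", "pool", "get", pool, opt])
    print(out.decode().strip())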

We also found that the throughput of creating a new file is lower than that of overwriting an existing one, and the SSDs see more writes during the overwrite. We then looked into the source code and the logs. A write to a newly created file goes through proxy_write, followed by a promote_object. Does this mean that when a new file is created, the object actually goes directly to the storage pool and is only promoted to the cache tier afterwards? A rough sketch of how we compared the two cases is below.
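This is only a sketch (the path and size are placeholders, and "create" vs. "overwrite" is distinguished by the open mode), but it shows the pattern:

import os, time

def write_file(path, mode="wb", total=1024 ** 3, chunk_size=4 * 1024 ** 2):
    # Write `total` bytes sequentially and return throughput in MB/s.
    chunk = b"\0" * chunk_size
    start = time.time()
    with open(path, mode) as f:
        f.seek(0)
        for _ in range(total // chunk_size):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    return total / (time.time() - start) / 1e6

path = "/mnt/cephfs/bench"           # placeholder CephFS path
if os.path.exists(path):
    os.unlink(path)                  # first run must really create new objects
print("create:    %.1f MB/s" % write_file(path, "wb"))
print("overwrite: %.1f MB/s" % write_file(path, "r+b"))  # rewrite in place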

Thanks,
Ting Yi Lin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
