Re: can cache-mode be set to readproxy for tier cache with ceph 0.94.9 ?

On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong <zhongjiajia@xxxxxxxxxxxx> wrote:
hi cephers:
    we are using ceph hammer 0.94.9 (yes, it's not the latest, jewel),
    with some ssd osds for tiering; cache-mode is set to readproxy and everything seems to work as expected,
    but when reading some small files from cephfs, we get 0 bytes.

Would you be able to share:

 #1 How small is the actual data?
 #2 Is the symptom reproducible with different data of the same size?
 #3 Can you share your ceph.conf (ceph --show-config)? A quick way to capture it is sketched below.
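For #3, a minimal sketch of how the configuration is usually captured (assumes the ceph CLI is available on the node; osd.0 is just an example daemon name):

    # dump the configuration as the local ceph binaries see it
    ceph --show-config > /tmp/ceph-show-config.txt

    # dump the live configuration of one running daemon via its admin socket
    # (run this on the node that hosts the daemon)
    ceph daemon osd.0 config show > /tmp/osd.0-config.json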
 
    
    I did some searching and found the link below;
    it describes almost the same problem we are suffering from, except that the cache-mode in the link is writeback, while ours is readproxy.

    That bug should have been FIXED in 0.94.9 (http://tracker.ceph.com/issues/12551),
    but we can still encounter it occasionally :(

   environment:
     - ceph: 0.94.9
     - kernel client: 4.2.0-36-generic (ubuntu 14.04)
     - anything else needed? (a few collection commands are sketched below)
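In case more environment details help, a few commands commonly used to collect them (a sketch, not tied to this particular cluster):

    # ceph package version on the node
    ceph -v
    # overall cluster status and health
    ceph -s
    # kernel version on the cephfs client host
    uname -r
    # confirm whether cephfs is mounted via the kernel client or ceph-fuse
    mount | grep ceph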

   Questions:
   1.  Does readproxy mode work on ceph 0.94.9? Only writeback and readonly appear in the hammer documentation. (Example commands are sketched below.)
   2.  Has anyone on Jewel or Hammer met the same issue?
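For question 1, readproxy is typically set and verified like this (a sketch; the pool name cephfs_ssd_cache is taken from the quoted setup below, and hammer syntax is assumed):

    # switch the cache tier to readproxy mode
    ceph osd tier cache-mode cephfs_ssd_cache readproxy

    # confirm the cache_mode the cluster actually reports for the pool
    ceph osd dump | grep cephfs_ssd_cache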


    looping in Yan, Zheng.
   Quote from the link for convenience:
 """

I am experiencing an issue with CephFS and cache tiering where the kernel
clients are reading files filled entirely with 0s.

The setup:
ceph 0.94.3
create cephfs_metadata replicated pool
create cephfs_data replicated pool
cephfs was created on the above two pools and populated with files, then:
create cephfs_ssd_cache replicated pool,
then set up the tier:
ceph osd tier add cephfs_data cephfs_ssd_cache
ceph osd tier cache-mode cephfs_ssd_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_ssd_cache

While the cephfs_ssd_cache pool is empty, multiple kernel clients on
different hosts open the same file (the size of the file is small, <10k) at
approximately the same time. A number of the clients, at the OS level, see
the entire file as empty. I can do a rados -p {cache pool} ls to list the
cached objects, and a rados -p {cache pool} get {object} /tmp/file to see
the complete contents of the file.
I can repeat this by setting cache-mode to forward, running rados -p {cache
pool} cache-flush-evict-all, checking that no objects remain in the cache
with rados -p {cache pool} ls, resetting cache-mode to writeback on the
now-empty pool, and repeating the simultaneous opens of the same file.

Has anyone seen this issue? It looks like it may be a race condition
where the object is not yet completely loaded into the cache pool, so the
cache pool serves out an incomplete object.
If anyone can shed some light or offer suggestions to help debug this issue,
that would be very helpful.

Thanks,
Arthur"""



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


