Re: can cache-mode be set to readproxy for tier cache with ceph 0.94.9 ?

You don't need to disconnect any clients from the RADOS cluster.
Tiering configuration should be transparent to Ceph clients.
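
For reference, a minimal sketch of how the tier/overlay state can be
checked on a live cluster, with clients still connected (pool names are
taken from the setup quoted below; substitute your own):

ceph osd dump | grep pool    # base pool line shows read_tier/write_tier; cache pool line shows its cache_mode
ceph df                      # cache pool usage grows as objects get promoted

Both are read-only queries, so they are safe to run while clients stay
mounted.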

On Fri, Dec 16, 2016 at 5:57 PM, JiaJia Zhong <zhongjiajia@xxxxxxxxxxxx> wrote:
> hi skinjo,
> forgot to ask: is it necessary to disconnect all the clients before
> doing set-overlay? we didn't move the clients off while setting the overlay
>
> ------------------ Original ------------------
> From:  "JiaJia Zhong"<zhongjiajia@xxxxxxxxxxxx>;
> Date:  Wed, Dec 14, 2016 11:24 AM
> To:  "Shinobu Kinjo"<skinjo@xxxxxxxxxx>;
> Cc:  "CEPH list"<ceph-users@xxxxxxxxxxxxxx>; "ukernel"<ukernel@xxxxxxxxx>;
> Subject:  Re:  can cache-mode be set to readproxy for
> tier cache with ceph 0.94.9 ?
>
> ------------------ Original ------------------
> From: "Shinobu Kinjo"<skinjo@xxxxxxxxxx>;
> Date: Wed, Dec 14, 2016 10:56 AM
> To: "JiaJia Zhong"<zhongjiajia@xxxxxxxxxxxx>;
> Cc: "CEPH list"<ceph-users@xxxxxxxxxxxxxx>; "ukernel"<ukernel@xxxxxxxxx>;
> Subject: Re:  can cache-mode be set to readproxy for
> tier cache with ceph 0.94.9 ?
>
>
>> ps: When we first met this issue, restarting the mds could cure that. (but
>> that was ceph 0.94.1).
>
> Is this still working?
> I think it's hard to reproduce the issue; the cluster works well now.
> Since you're using 0.94.9, the bug (#12551) you mentioned was fixed.
>
> Can you do the following to check whether an object that appears to you
> as ZERO size is actually there:
> # rados -p ${cache pool} ls
> # rados -p ${cache pool} get ${object} /tmp/file
> # ls -l /tmp/file
> I did these with the ZERO-size file: the object retrieved via rados get
> is the original, intact file. It only looks abnormal through cephfs.
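> A quick cross-check sketch (the mount point and file path below are
> placeholders for our kernel-client mount and the affected file):
>
> rados -p ${cache pool} get ${object} /tmp/from_rados
> md5sum /tmp/from_rados /mnt/cephfs/path/to/same/file     # checksums should match
> stat -c '%s %n' /tmp/from_rados /mnt/cephfs/path/to/same/file    # sizes should match too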
> ------------------ Original ------------------
> From:  "Shinobu Kinjo"<skinjo@xxxxxxxxxx>;
> Date:  Tue, Dec 13, 2016 06:21 PM
> To:  "JiaJia Zhong"<zhongjiajia@xxxxxxxxxxxx>;
> Cc:  "CEPH list"<ceph-users@xxxxxxxxxxxxxx>; "ukernel"<ukernel@xxxxxxxxx>;
> Subject:  Re:  can cache-mode be set to readproxy for tier
> cache with ceph 0.94.9 ?
>
>
>
> On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong <zhongjiajia@xxxxxxxxxxxx>
> wrote:
>>
>> hi cephers:
>>     we are using ceph hammer 0.94.9 (yes, it's not the latest, jewel),
>>     with some ssd osds for tiering; cache-mode is set to readproxy and
>> everything seems to work as expected,
>>     but when reading some small files from cephfs, we get 0 bytes.
>
>
> Would you be able to share:
>
> #1 How small is the actual data?
> #2 Is the symptom reproducible with different data of the same size?
> #3 Can you share your ceph.conf (ceph --show-config)? (a quick sketch
> for #2 and #3 follows below)
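> A rough reproduction sketch (paths are hypothetical; /mnt/cephfs stands
> for a kernel-client mount; match bs/count to the affected file sizes):
>
> dd if=/dev/urandom of=/mnt/cephfs/testfile bs=4k count=2    # small test file
> md5sum /mnt/cephfs/testfile                      # read it back here and from another client
> ceph --show-config > /tmp/ceph-show-config.txt   # running config for #3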
>
>>
>>
>>     I did some search and got the below link,
>>
>> http://ceph-users.ceph.narkive.com/g4wcB8ED/cephfs-with-cache-tiering-reading-files-are-filled-with-0s
>>     that's almost the same as what we are suffering from, except that
>> the cache-mode in the link is writeback while ours is readproxy.
>>
>>     that bug should have been FIXED in 0.94.9
>> (http://tracker.ceph.com/issues/12551),
>>     but we still run into it occasionally :(
>>
>>    environment:
>>      - ceph: 0.94.9
>>      - kernel client: 4.2.0-36-generic ( ubuntu 14.04 )
>>      - any others needed ?
>>
>>    Questions:
>>    1.  does readproxy mode work on ceph 0.94.9? only writeback and
>> readonly are listed in the hammer documentation. (see the sketch below)
>>    2.  has anyone on Jewel or Hammer hit the same issue?
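>>    For question 1, a minimal sketch of how the mode can be set and
>> verified (cache pool name borrowed from the setup quoted further below;
>> whether readproxy is supported on hammer is exactly what we are asking,
>> so treat it as an assumption):
>>
>>    ceph osd tier cache-mode cephfs_ssd_cache readproxy   # 0.94.9 accepted this for us
>>    ceph osd dump | grep cephfs_ssd_cache    # pool line should show cache_mode readproxy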
>>
>>
>>     looping in Yan, Zheng
>>    Quoted from the link above for convenience.
>>  """
>> Hi,
>>
>> I am experiencing an issue with CephFS with cache tiering where the kernel
>> clients are reading files filled entirely with 0s.
>>
>> The setup:
>> ceph 0.94.3
>> create cephfs_metadata replicated pool
>> create cephfs_data replicated pool
>> cephfs was created on the above two pools, populated with files, then:
>> create cephfs_ssd_cache replicated pool,
>> then adding the tiers:
>> ceph osd tier add cephfs_data cephfs_ssd_cache
>> ceph osd tier cache-mode cephfs_ssd_cache writeback
>> ceph osd tier set-overlay cephfs_data cephfs_ssd_cache
>>
>> While the cephfs_ssd_cache pool is empty, multiple kernel clients on
>> different hosts open the same file (the file is small, <10k) at
>> approximately the same time. A number of the clients then see, at the
>> OS level, an entirely empty file. I can do a rados -p {cache pool} ls
>> for the list of objects cached, and a rados -p {cache pool} get {object}
>> /tmp/file, and see the complete contents of the file.
>> I can repeat this by setting cache-mode to forward, rados -p {cache pool}
>> cache-flush-evict-all, checking no more objects in cache with rados -p
>> {cache pool} ls, resetting cache-mode to writeback with an empty pool, and
>> doing the multiple same file opens.
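>> In command form, that reproduction loop is roughly (a sketch, using the
>> pool name from the setup above):
>>
>> ceph osd tier cache-mode cephfs_ssd_cache forward
>> rados -p cephfs_ssd_cache cache-flush-evict-all   # flush and evict everything from the tier
>> rados -p cephfs_ssd_cache ls                      # should return nothing once the tier is empty
>> ceph osd tier cache-mode cephfs_ssd_cache writeback
>> (then have several kernel clients open the same small file at about the same time)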
>>
>> Has anyone seen this issue? It looks like a race condition where the
>> object is not yet completely loaded into the cache pool, so the cache
>> pool serves out an incomplete object.
>> If anyone can shed some light or any suggestions to help debug this issue,
>> that would be very helpful.
>>
>> Thanks,
>> Arthur"""
>>
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



