Re: CephFS is not maintaining consistency

On Tue, Feb 2, 2016 at 5:32 PM, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:
> One of my clients is using
>
> 4.3.5-300.fc23.x86_64 (Fedora release 23)

Did you encounter this problem on the client using the 4.3.5 kernel? If you
did, this issue should be a ceph-fuse bug.
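
Just as a rough way to confirm what each machine is actually using (the
mount point /mnt/cephfs below is only a placeholder for your real one):

    # kernel version on each client
    uname -r

    # mount type: the kernel client shows up as type "ceph",
    # ceph-fuse as type "fuse.ceph-fuse"
    grep /mnt/cephfs /proc/mounts

If the machine that saw the stale view is a ceph-fuse mount, the kernel
client bug mentioned further down would not explain it.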

>
> while all the other clients rely on
>
> 3.10.0-327.4.4.el7.x86_64 (CentOS Linux release 7.2.1511)
>
> Should I file a bug report on the Red Hat Bugzilla?

You can open a bug at http://tracker.ceph.com/projects/cephfs/issues.
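
If you do, client-side logs with higher debug levels usually help; for a
ceph-fuse mount something along these lines (mount point is again a
placeholder, and the option spelling is from memory, so double-check it
against the docs for your release) should capture the client traffic:

    ceph-fuse /mnt/cephfs --debug-client=20 --debug-ms=1 --log-file=/tmp/ceph-fuse.log

plus the "ceph -s" output and the MDS log from around the time the new
files fail to show up.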

Regards
Yan, Zheng

>
> On Tue, Feb 2, 2016 at 8:57 AM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>
> On Tue, Feb 2, 2016 at 2:27 AM, Mykola Dvornik <mykola.dvornik@xxxxxxxxx>
> wrote:
>
> What version are you running on your servers and clients?
>
> Are you using a 4.1 or 4.2 kernel? See
> https://bugzilla.kernel.org/show_bug.cgi?id=104911. Upgrading to a 4.3+
> kernel, or to 4.1.17 or 4.2.8, resolves this issue.
>
> On the clients:
>
> ceph-fuse --version
> ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
>
> MDS/OSD/MON:
>
> ceph --version
> ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
>
> Exactly what changes are you making that aren't visible?
>
> I am creating some new files in non-root folders.
>
> What's the output of "ceph -s"?
>
> ceph -s
>     cluster 98d72518-6619-4b5c-b148-9a781ef13bcb
>      health HEALTH_OK
>      monmap e1: 1 mons at {000-s-ragnarok=XXX.XXX.XXX.XXX:6789/0}
>             election epoch 1, quorum 0 000-s-ragnarok
>      mdsmap e576: 1/1/1 up {0=000-s-ragnarok=up:active}
>      osdmap e233: 16 osds: 16 up, 16 in
>             flags sortbitwise
>       pgmap v1927636: 1088 pgs, 2 pools, 1907 GB data, 2428 kobjects
>             3844 GB used, 25949 GB / 29793 GB avail
>                 1088 active+clean
>   client io 4381 B/s wr, 2 op
>
> In addition, on the clients' side I have:
>
> cat /etc/fuse.conf
> user_allow_other
> auto_cache
> large_read
> max_write = 16777216
> max_read = 16777216
>
> -Mykola
>
> On Mon, Feb 1, 2016 at 5:06 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>
> On Monday, February 1, 2016, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:
>
> Hi guys,
>
> This is sort of a rebuttal. I have CephFS deployed and mounted on a couple
> of clients via ceph-fuse (chosen for its quota support and the possibility
> of killing the ceph-fuse process to avoid stale mounts). The problem is
> that sometimes changes made on one client are not visible on the others.
> It appears to be a rather random process. The only workaround is to touch
> a new file in the affected folder, which apparently triggers
> synchronization. I used the kernel client before without any problems of
> this kind. So the question is: is this expected behavior of ceph-fuse?
>
> What version are you running on your servers and clients?
>
> Exactly what changes are you making that aren't visible?
>
> What's the output of "ceph -s"?
>
> We see bugs like this occasionally, but I can't think of any recent ones in
> ceph-fuse -- they're actually seen a lot more often in the kernel client.
>
> -Greg
>
> Regards, Mykola
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


