Thanks,
Brett
On Monday, May 16, 2016, John Spray <jspray@xxxxxxxxxx> wrote:
On Mon, May 16, 2016 at 5:42 AM, Andrus, Brian Contractor
<bdandrus@xxxxxxx> wrote:
> So this ‘production ready’ CephFS for Jewel seems not quite there yet….
>
> Currently I have a single system mounting CephFS and merely scp-ing data to
> it.
>
> The CephFS mount shows 168 TB used and 345 TB available of 514 TB total.
>
> Every so often, I get a HEALTH_WARN message of "mds0: Client failing to
> respond to cache pressure".
What client, what version?
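For reference, if it's the kernel client then the relevant version is the
kernel itself (uname -r); if it's ceph-fuse then "ceph-fuse --version". It
can also be worth looking at what the MDS thinks of its client sessions,
e.g. via the admin socket on the MDS host, something like:

    ceph daemon mds.<id> session ls

(with <id> replaced by your MDS name). That listing should include each
session's client address and reported version, which makes it easier to tie
the cache pressure warning to a particular client.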
> Even if I stop the scp, it will not go away until I umount/remount the
> filesystem.
>
> For testing, I had the CephFS filesystem mounted on about 50 systems, and
> when updatedb started on them, I got all kinds of issues.
All kinds of issues...? Need more specific bug reports than that to fix things.
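(As an aside: if updatedb is what kicked things off, most distributions'
mlocate can be told to skip network filesystems so that 50 nodes don't all
crawl the whole tree at once. A minimal sketch, assuming the mlocate flavour
of updatedb, is to add the CephFS filesystem types to the PRUNEFS line in
/etc/updatedb.conf on each client, e.g.:

    PRUNEFS = "ceph fuse.ceph <plus the existing entries>"

where "ceph" covers kernel-client mounts and "fuse.ceph" covers ceph-fuse
mounts.)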
John
> I figured having updatedb run on a few systems would be a good ‘see what
> happens’ test for when there is a fair amount of access to it.
>
> So, should I not even be considering using CephFS as a large storage mount
> for a compute cluster? Is there a sweet spot for what CephFS would be good
> for?
>
> Brian Andrus
> ITACS/Research Computing
> Naval Postgraduate School
> Monterey, California
> voice: 831-656-6238
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com