Our corresponding RHCS downstream release of CephFS will be labeled "Tech
Preview", which means it's unsupported, but we believe it's stable enough
for experimentation. When we do release CephFS as "production ready", that
will mean we've done even more exhaustive testing and that it is a
supported downstream product.
Since our downstream follows our upstream releases, we try to use the same
terminology both upstream and downstream, so that folks can better assess
individual releases.
I hope that helps,
Brett
On Tuesday, May 17, 2016, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
Hi Brett,
aside from the question of whether what Brian experienced has anything to
do with code stability:
since it is new to me that there is a difference between "stable" and
"production ready", I would be happy if you could tell me what the table
looks like.
One of the team was joking something like:
"stable" > "really stable" > "rock stable" > "pre-production ready" >
"production ready on your own risk" > "production ready, but can still
go boom" > "production ready, but still kinda wonky" > "EOL"
I did not check the past major releases in depth, but I think I have
never seen anything "bigger" than "stable" as the word used to point out
that the code is ready for use and you don't need to expect anything too
evil to happen, i.e. "production ready".
So any clarification on whether this code is now stable or not, and if
yes, what the exact difference between "stable" and "production ready"
is, would be very welcome.
Thank you !
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:info@xxxxxxxxxxxxxxxxx
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402, local court of Hanau
Management: Oliver Dzombic
Tax no.: 35 236 3622 1
VAT ID: DE274086107
On 16.05.2016 at 15:59, Brett Niver wrote:
> The terminology we're using to describe CephFS in Jewel is "stable" as
> opposed to production ready.
>
> Thanks,
> Brett
>
> On Monday, May 16, 2016, John Spray <jspray@xxxxxxxxxx
> <mailto:jspray@xxxxxxxxxx>> wrote:
>
> On Mon, May 16, 2016 at 5:42 AM, Andrus, Brian Contractor
> <bdandrus@xxxxxxx> wrote:
> > So this ‘production ready’ CephFS for Jewel seems a little not quite….
> >
> >
> >
> > Currently I have a single system mounting CephFS and merely scp-ing
> > data to it.
> >
> > The CephFS mount has 168 TB used, 345 TB / 514 TB avail.
> >
> >
> >
> > Every so often, I get a HEALTH_WARN message of mds0: Client failing to
> > respond to cache pressure
>
> What client, what version?
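> One quick way to check, if it helps: the MDS admin socket lists each
> session along with the client's metadata ("ceph daemon mds.<name>
> session ls"), which includes the reported ceph version (or kernel
> version for kernel clients). A rough sketch, assuming Python on the
> MDS host and an MDS named "0" (adjust for your cluster):
>
>     # Sketch: list CephFS client sessions and the version each client
>     # reports, via the MDS admin socket (run on the MDS host as root).
>     import json
>     import subprocess
>
>     def list_client_sessions(mds_name):
>         out = subprocess.check_output(
>             ["ceph", "daemon", "mds." + mds_name, "session", "ls"])
>         for session in json.loads(out.decode("utf-8")):
>             meta = session.get("client_metadata", {})
>             print(session.get("id"),
>                   meta.get("hostname", "?"),
>                   meta.get("ceph_version",
>                            meta.get("kernel_version", "unknown")))
>
>     list_client_sessions("0")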
>
> > Even if I stop the scp, it will not go away until I umount/remount the
> > filesystem.
> >
> >
> >
> > For testing, I had CephFS mounted on about 50 systems, and when
> > updatedb started on them, I got all kinds of issues with it all.
>
> All kinds of issues...? Need more specific bug reports than that to
> fix things.
>
> John
>
> > I figured having updatedb run on a few systems would be a good ‘see
> > what happens’ if there is a fair amount of access to it.
> >
> >
> >
> > So, should I not even be considering using CephFS as a large storage
> > mount for a compute cluster? Is there a sweet spot for what CephFS
> > would be good for?
> >
> >
> >
> >
> >
> > Brian Andrus
> >
> > ITACS/Research Computing
> >
> > Naval Postgraduate School
> >
> > Monterey, California
> >
> > voice: 831-656-6238
> >
> >
> >
> >
> >
> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com