Re: failing to respond to cache pressure

Hi Brett,

aside from the question of whether what Brian experienced has anything to do
with code stability:

since it is new to me that there is a difference between "stable" and
"production ready", I would be happy if you could tell me what that scale
looks like.

Someone on the team was joking something like:

"stable" > "really stable" > "rock stable" > "pre-production ready" >
"production ready on your own risk" > "production ready, but can still
go boom" > "production ready, but still kinda wonky" > "EOL"


I did not check the past major releases in depth, but I don't think I have
ever seen anything "bigger" than "stable" used as the word to indicate that
the code is ready for use and that you don't need to expect anything too
evil to happen, i.e. "production ready".

So any clarification on whether this code is now stable or not, and if it
is, what the exact difference between "stable" and "production ready" is,
would be very welcome.

Thank you !

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, Hanau District Court
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107


On 16.05.2016 at 15:59, Brett Niver wrote:
> The terminology we're using to describe CephFS in Jewel is "stable" as
> opposed to production ready.
> 
> Thanks,
> Brett
> 
> On Monday, May 16, 2016, John Spray <jspray@xxxxxxxxxx> wrote:
> 
>     On Mon, May 16, 2016 at 5:42 AM, Andrus, Brian Contractor
>     <bdandrus@xxxxxxx> wrote:
>     > So this ‘production ready’ CephFS for jewel seems a little not quite….
>     >
>     >
>     >
>     > Currently I have a single system mounting CephFS and merely
>     > scp-ing data to it.
>     >
>     > The CephFS mount has 168 TB used, 345 TB / 514 TB avail.
>     >
>     >
>     >
>     > Every so often, I get a HEALTH_WARN message of mds0: Client failing to
>     > respond to cache pressure
> 
>     What client, what version?
> 
>     > Even if I stop the scp, it will not go away until I umount/remount the
>     > filesystem.
>     >
>     >
>     >
>     > For testing, I had the CephFS mounted on about 50 systems, and when
>     > updatedb started on them, I got all kinds of issues with it all.
> 
>     All kinds of issues...?  Need more specific bug reports than that to
>     fix things.
> 
>     John
> 
>     > I figured having updatedb run on a few systems would be a good 'see
>     > what happens' if there is a fair amount of access to it.
>     >
>     >
>     >
>     > So, should I not even be considering using CephFS as a large storage
>     > mount for a compute cluster? Is there a sweet spot for what CephFS
>     > would be good for?
>     >
>     >
>     >
>     >
>     >
>     > Brian Andrus
>     >
>     > ITACS/Research Computing
>     >
>     > Naval Postgraduate School
>     >
>     > Monterey, California
>     >
>     > voice: 831-656-6238
>     >
>     >
>     >
>     >
>     >
>     >
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
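
For anyone who lands here with the same warning: below is a minimal sketch,
not from the thread, of how one might answer John's "what client, what
version?" question by listing the MDS client sessions over its admin socket.
It assumes a Jewel-era cluster, the ceph CLI installed on the host running
the active MDS, and an MDS id of "0"; the session field names are what a
Jewel MDS typically reports and may differ in other releases.

#!/usr/bin/env python
# Sketch: list CephFS client sessions on an MDS so the client flagged by
# "failing to respond to cache pressure" can be identified along with its
# version. Assumes admin-socket access on the MDS host; the MDS id and the
# field names used below are assumptions, not taken from the thread.
import json
import subprocess

MDS_ID = "0"  # placeholder: take the real id from `ceph status`


def mds_sessions(mds_id):
    # `ceph daemon mds.<id> session ls` prints one JSON array with one
    # entry per client session known to that MDS.
    out = subprocess.check_output(
        ["ceph", "daemon", "mds." + mds_id, "session", "ls"])
    return json.loads(out)


def main():
    for session in mds_sessions(MDS_ID):
        meta = session.get("client_metadata", {}) or {}
        version = meta.get("ceph_version") or meta.get("kernel_version",
                                                       "unknown")
        print("client %s  caps=%s  version=%s" % (
            session.get("inst", session.get("id")),
            session.get("num_caps", "?"),
            version))


if __name__ == "__main__":
    main()

Running it on the MDS host should show each session's address, how many
capabilities it holds, and which client version it reports; the session
holding an unusually large number of caps is usually the one the warning
is about.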



