Re: the state of cephfs in giant

On Mon, 13 Oct 2014, Wido den Hollander wrote:
> On 13-10-14 20:16, Sage Weil wrote:
> > With Giant, we are at a point where we would ask that everyone try
> > things out for any non-production workloads. We are very interested in
> > feedback around stability, usability, feature gaps, and performance. We
> > recommend:
> 
> A question to clarify this for anybody out there. Do you think it is
> safe to run CephFS on a cluster which is doing production RBD/RGW I/O?
> 
> Will it be the MDS/CephFS part which breaks, or are there potential issues
> due to OSD classes which might cause OSDs to crash because of bugs in CephFS?
> 
> I know you can't fully rule it out, but it would be useful to have this
> clarified.

I can't think of any issues that this would cause with the OSDs.  CephFS 
doesn't use any rados classes, just the core rados functionality that RGW 
also uses.
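
To illustrate the distinction, here is a rough python-rados sketch (the 
pool name 'data', the bundled 'hello' demo class, and a python-rados new 
enough to expose Ioctx.execute() are all assumptions): the first request 
is the kind of plain rados op the MDS and CephFS clients send, the second 
is an OSD object class invocation of the sort RGW relies on.

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')   # pool name is an assumption

# Plain rados write -- the kind of request CephFS generates
ioctx.write_full('example-object', b'payload')

# OSD object class call -- the kind of request RGW makes; 'hello'/'say_hello'
# is the demo class from src/cls/hello, assuming it is loadable on the OSDs
ret, out = ioctx.execute('example-object', 'hello', 'say_hello', b'')
print(ret, out)

ioctx.close()
cluster.shutdown()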

On the monitor side, there is a reasonable probability of triggering a 
CephFS-related health warning.  There is also the potential for the 
MDSMonitor.cc code to crash the mon, but I don't think we've seen any 
problems there recently.
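
If you want to watch for that from a script, something along these lines 
should work (a sketch only; the JSON layout of the 'ceph health' output 
changes between releases, so the 'summary' field used here is an 
assumption, not a stable interface):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# 'ceph health' via the monitor command interface, asking for JSON output
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "health", "format": "json"}), b'')
health = json.loads(outbuf)

# Pick out anything in the health summary that mentions the MDS
for item in health.get('summary', []):
    if 'mds' in item.get('summary', '').lower():
        print(item.get('severity'), item.get('summary'))

cluster.shutdown()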

So, probably safe.

sage