Re: Is CephFS ready for production?

I think this is what I saw at the 2013 Hong Kong summit, at least in the Ceph Enterprise version.

Best Regards
-- Ray

On Sat, Apr 25, 2015 at 12:36 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
I think the VMware plugin was going to be contracted out by the business people, and it was never going to be upstream anyway -- I've not heard anything since then, but you'd need to ask them, I think.
-Greg

On Fri, Apr 24, 2015 at 7:17 AM Marc <mail@xxxxxxxxxx> wrote:
On 22/04/2015 16:04, Gregory Farnum wrote:
> On Tue, Apr 21, 2015 at 9:53 PM, Mohamed Pakkeer <mdfakkeer@xxxxxxxxx> wrote:
>> Hi sage,
>>
>> When can we expect a fully functional fsck for CephFS? Can we get it in the
>> next major release? Is there a roadmap or time frame for the fully
>> functional fsck release?
> We're working on it as fast as we can, and it'll be done when it's
> done. ;) More seriously, I'm still holding out a waning hope that
> we'll have the "forward scrub" portion ready for Infernalis and then
> we'll see how long it takes to assemble a working repair tool from
> that.
>
> On Wed, Apr 22, 2015 at 2:20 AM, Marc <mail@xxxxxxxxxx> wrote:
>> Hi everyone,
>>
>> I am curious about the current state of the roadmap as well. Alongside the
>> already-asked question re: VMware support, where are we at with CephFS's
>> multi-MDS stability and dynamic subtree partitioning?
> Zheng has fixed a ton of bugs in these areas over the last year, but
> both features are farther down the roadmap since we don't think we
> need them for the earliest production users.
> -Greg

Thanks for letting us know! Since the Red Hat acquisition, the ICE
roadmap seems to have disappeared. Is a VMware driver still being worked
on? With VMware being closed source and all, I imagine this lies mostly
within VMware Inc.'s domain, correct? Having iSCSI proxies as
mediators is rather clunky...

(And yes, I am actively working on getting the interested parties to
take a serious look at KVM, but they have become very comfortable with
VMware vSphere Enterprise Plus...)


Thanks and have a nice weekend!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Attachment: 2013-11-05 165007.jpg
Description: JPEG image

