I got it, thank you ☺

From: Дмитрий Глушенок [mailto:glush@xxxxxxxxxx]
Sent: 19 July 2017 18:20
To: 许雪寒
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: How's cephfs going?

You are right. I forgot to mention that the client was using kernel 4.9.9.

--
Dmitry Glushenok
Jet Infosystems

On 19 July 2017, at 12:36, 许雪寒 <xuxuehan@xxxxxx> wrote:

Hi, thanks for sharing :-) So I guess you have not put CephFS into a real production environment yet, and it is still in the test phase, right? Thanks again :-)

From: Дмитрий Глушенок [mailto:glush@xxxxxxxxxx]
Sent: 19 July 2017 17:33
To: 许雪寒
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: How's cephfs going?

Hi,

I can share negative test results (on Jewel 10.2.6). All tests were performed while actively writing to CephFS from a single client (about 1300 MB/sec). The cluster consists of 8 nodes with 8 OSDs each (2 SSDs for journals and metadata, 6 HDDs in RAID6 for data); MON/MDS run on dedicated nodes. There are 2 MDS in total, active/standby.

- Crashing one node resulted in writes hanging for 17 minutes. Repeating the test resulted in CephFS hanging forever.
- Restarting the active MDS resulted in a successful failover to the standby. Then, after the standby became active and the restarted MDS became standby, the new active was restarted. CephFS hung for 12 minutes.

P.S. Planning to repeat the tests on 10.2.7 or higher.

--
Dmitry Glushenok
Jet Infosystems
+7-910-453-2568

On 19 July 2017, at 6:47, 许雪寒 <xuxuehan@xxxxxx> wrote:

Is there anyone else willing to share some usage information about CephFS? Could the developers tell us whether CephFS is a major effort in the overall Ceph development?

From: 许雪寒
Sent: 17 July 2017 11:00
To: ceph-users@xxxxxxxxxxxxxx
Subject: How's cephfs going?

Hi, everyone.

We intend to use CephFS of the Jewel version, but we don't know its status. Is it production ready in Jewel? Does it still have lots of bugs? Is it a major effort of the current Ceph development? And who is using CephFS now?
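
For anyone wanting to measure hangs like the ones described in the failover tests above, below is a minimal sketch that logs MDS state and CephFS write latency once per second, so a 12- or 17-minute hang shows up as a gap between log lines. It assumes the standard ceph CLI is configured on the client and that CephFS is mounted at /mnt/cephfs (a hypothetical path); it is one way to observe such hangs, not the exact procedure used in the tests above.

#!/usr/bin/env python
# Sketch: once per second, record which MDS is active (via "ceph mds stat")
# and how long a small synchronous write to the CephFS mount takes.
# During an MDS/OSD failover the write call blocks, and the hang duration
# can be read from the elapsed time and the gap between log lines.
import os
import subprocess
import time

MOUNT = "/mnt/cephfs"                          # hypothetical mount point
PROBE = os.path.join(MOUNT, "failover-probe.tmp")

while True:
    ts = time.strftime("%Y-%m-%d %H:%M:%S")

    # Ask the monitors for the current MDS map summary.
    # Note: if the monitors themselves are unreachable, this call can block too.
    try:
        mds = subprocess.check_output(
            ["ceph", "mds", "stat"],
            stderr=subprocess.STDOUT,
            universal_newlines=True).strip()
    except subprocess.CalledProcessError as e:
        mds = "ceph mds stat failed: %s" % e.output.strip()

    # Time a small synchronous write to the mount.
    start = time.time()
    with open(PROBE, "w") as f:
        f.write(ts + "\n")
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start

    print("%s  write=%.2fs  %s" % (ts, elapsed, mds))
    time.sleep(1)

Running this on the writing client while crashing a node or restarting the active MDS gives a per-second record of when I/O stalled and when it resumed.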