Re: How's cephfs going?

Interesting. Any FUSE client data points?
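
For anyone wanting to gather FUSE data points, a minimal sketch of mounting the same tree via both clients (the monitor address, mount point and keyring path here are assumptions, not from the thread):

  # kernel client:
  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # FUSE client, for comparison:
  ceph-fuse -m mon1:6789 /mnt/cephfs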

On 19 July 2017 at 20:21, Дмитрий Глушенок <glush@xxxxxxxxxx> wrote:
> RBD (via krbd) was in use at the same time - no problems.
>
> On 19 July 2017, at 12:54, Blair Bethwaite <blair.bethwaite@xxxxxxxxx>
> wrote:
>
> It would be worthwhile repeating the first test (crashing/killing an
> OSD host) again with just plain rados clients (e.g. rados bench)
> and/or rbd. It's not clear whether your issue is specifically related
> to CephFS or actually something else.
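
A minimal sketch of such a run, assuming a scratch pool named "testpool" (the pool name is an assumption; create and remove it as appropriate):

  # sustained writes; --no-cleanup keeps the objects for a later read pass
  rados -p testpool bench 300 write --no-cleanup
  # crash/kill an OSD host mid-run, then watch for blocked requests:
  ceph -s
  ceph health detail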
>
> Cheers,
>
> On 19 July 2017 at 19:32, Дмитрий Глушенок <glush@xxxxxxxxxx> wrote:
>
> Hi,
>
> I can share negative test results (on Jewel 10.2.6). All tests were
> performed while actively writing to CephFS from a single client (about 1300
> MB/sec). The cluster consists of 8 nodes with 8 OSDs each (2 SSDs for
> journals and metadata, a 6-HDD RAID6 for data); MON/MDS run on dedicated
> nodes. There are 2 MDS in total, in an active/standby configuration.
> - Crashing one node resulted in writes hanging for 17 minutes. Repeating the
> test resulted in CephFS hanging forever.
> - Restarting the active MDS resulted in a successful failover to the standby.
> Then, after the standby became active and the restarted MDS became the
> standby, the new active was restarted. CephFS hung for 12 minutes.
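
For reference, the failover sequence described above could be driven with something like the following, assuming a systemd deployment (the MDS instance names mds-a and mds-b are assumptions):

  ceph mds stat                      # note which daemon is active
  systemctl restart ceph-mds@mds-a   # restart the active; the standby takes over
  # after mds-a rejoins as standby, restart the new active:
  systemctl restart ceph-mds@mds-b
  ceph mds stat                      # confirm the roles have swapped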
>
> P.S. Planning to repeat the tests on 10.2.7 or higher.
>
> On 19 July 2017, at 6:47, 许雪寒 <xuxuehan@xxxxxx> wrote:
>
> Is there anyone else willing to share their experience with CephFS? And
> could the developers tell us whether CephFS is a major focus of overall
> Ceph development?
>
> From: 许雪寒
> Sent: 17 July 2017 11:00
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: How's cephfs going?
>
> Hi, everyone.
>
> We intend to use the Jewel version of CephFS, but we don't know its current
> status. Is it production-ready in Jewel? Does it still have lots of bugs? Is
> it a major focus of current Ceph development? And who is using CephFS now?
>
>
> --
> Dmitry Glushenok
> Jet Infosystems
>
>
>
>
>
>
> --
> Cheers,
> ~Blairo
>
>
> --
> Dmitry Glushenok
> Jet Infosystems
>



-- 
Cheers,
~Blairo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



