Re: How's cephfs going?


 



Hello,

Can anyone share their experience with the built-in FSCache support, with or without CephFS?

I'm interested in knowing the following:
- Are you using FSCache in a production environment?
- How large is your Ceph deployment?
- If used with CephFS, how many Ceph clients are using FSCache?
- Which versions of Ceph and the Linux kernel are you running?
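For context, my (possibly incomplete) understanding is that the kernel CephFS client caches through FSCache via cachefilesd; a minimal sketch of what I expect the client-side setup to look like is below. The monitor address, secret file path and package manager are placeholders, not from a real deployment.

  # local cache daemon backing FSCache (cache location is set in /etc/cachefilesd.conf)
  apt-get install cachefilesd          # or the equivalent for your distro
  systemctl enable --now cachefilesd

  # mount CephFS with the kernel client; the 'fsc' option turns on FSCache
  # (the kernel must be built with CONFIG_CEPH_FSCACHE)
  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,fsc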


Thank you.
Anish Gupta




On Wednesday, July 19, 2017, 6:06:57 AM PDT, Donny Davis <donny@xxxxxxxxxxxxxx> wrote:


I had a corruption issue with the FUSE client on Jewel. I use CephFS for a Samba share with a light load, and I was using the FUSE client. I had a power flap and didn't realize my UPS batteries had gone bad, so the MDS servers were cycled a couple of times and somehow the file system became corrupted. I moved to the kernel client and, after the FUSE experience, I put it through horrible things.

I had every client connected start copying over their user profiles, and then I started pulling and restarting MDS servers. I saw very few errors, and only blips in the copy processes. My experience with the kernel client has been very positive and I would say stable. Nothing replaces a solid backup copy of your data if you care about it. 

I am still on Jewel, my CephFS is daily driven, and I can barely notice the difference between it and the past setups I have had.



On Wed, Jul 19, 2017 at 7:02 AM, Дмитрий Глушенок <glush@xxxxxxxxxx> wrote:
Unfortunately not. The FUSE client was ruled out due to poor performance.

On 19 July 2017, at 13:45, Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote:

Interesting. Any FUSE client data-points?

On 19 July 2017 at 20:21, Дмитрий Глушенок <glush@xxxxxxxxxx> wrote:
RBD (via krbd) was in action at the same time - no problems.

On 19 July 2017, at 12:54, Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote:

It would be worthwhile repeating the first test (crashing/killing an
OSD host) again with just plain rados clients (e.g. rados bench)
and/or rbd. It's not clear whether your issue is specifically related
to CephFS or actually something else.
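Something like the sketch below should generate comparable load from a plain rados client (the pool name, runtime, block size and thread count are arbitrary examples):

  ceph osd pool create bench 64
  # sustained writes while you crash/kill the OSD host
  rados bench -p bench 300 write -b 4M -t 16 --no-cleanup
  # optional read phase over the objects written above
  rados bench -p bench 300 seq -t 16
  rados -p bench cleanup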

Cheers,

On 19 July 2017 at 19:32, Дмитрий Глушенок <glush@xxxxxxxxxx> wrote:

Hi,

I can share negative test results (on Jewel 10.2.6). All tests were performed while actively writing to CephFS from a single client (about 1300 MB/s). The cluster consists of 8 nodes with 8 OSDs each (2 SSDs for journals and metadata, 6 HDDs in RAID6 for data); MON/MDS run on dedicated nodes. There are 2 MDS in total, active/standby.
- Crashing one node resulted in writes hanging for 17 minutes. Repeating the test resulted in CephFS hanging forever.
- Restarting the active MDS resulted in a successful failover to the standby. Then, after the standby became active and the restarted MDS became the standby, the new active was restarted. CephFS hung for 12 minutes.
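(If anyone wants to reproduce this, the failover can be followed from another terminal with something like:

  ceph -w          # stream cluster/MDS state changes during the restarts
  ceph mds stat    # shows which MDS is currently active and which is standby

Nothing fancy, just enough to see when the standby takes over.)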

P.S. I'm planning to repeat the tests on 10.2.7 or higher.

On 19 July 2017, at 6:47, 许雪寒 <xuxuehan@xxxxxx> wrote:

Is there anyone else willing to share some usage information about CephFS? Could the developers tell us whether CephFS is a major effort within overall Ceph development?

From: 许雪寒
Sent: 17 July 2017 11:00
To: ceph-users@xxxxxxxxxxxxxx
Subject: How's cephfs going?

Hi, everyone.

We intend to use CephFS from the Jewel release; however, we don't know its status. Is it production-ready in Jewel? Does it still have lots of bugs? Is it a major effort of current Ceph development? And who is using CephFS now?


--
Dmitry Glushenok
Jet Infosystems






--
Cheers,
~Blairo


--
Dmitry Glushenok
Jet Infosystems




--
Cheers,
~Blairo

--
Dmitry Glushenok
Jet Infosystems




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
