Re: "failed to open ino"

On 27 Nov 2017 1:06 p.m., "Jens-U. Mozdzen" <jmozdzen@xxxxxx> wrote:
> Hi David,
>
> Quoting David C <dcsysengineer@xxxxxxxxx>:
>
>> Hi Jens
>>
>> We also see these messages quite frequently, mainly the "replicating
>> dir...". Only seen "failed to open ino" a few times, so we didn't do any
>> real investigation. Our setup is very similar to yours: 12.2.1,
>> active/standby MDS, and exporting CephFS through KNFS (hoping to replace
>> with Ganesha soon).
>
> Been there, done that - using Ganesha more than doubled the run-time of
> our jobs, while with knfsd the run-time is about the same for CephFS-based
> and "local disk"-based files. But YMMV, so if you see speeds with Ganesha
> that are similar to knfsd, please report back with details...

I'd be interested to know if you tested Ganesha over a CephFS kernel mount (i.e. using the VFS FSAL) or if you used the Ceph FSAL, and also which server and client versions you tested.

Prior to Luminous, Ganesha writes were terrible due to a bug with fsync calls in the MDS code. The fix went into both the MDS and client code, so if you're running Ganesha on top of the kernel mount you'll need a pretty recent kernel to see the write improvements.

From my limited Ganesha testing so far, reads are better when exporting the kernel mount, while writes are much better with the Ceph FSAL. That's expected in my case, as I'm using the CentOS kernel; I was hoping the aforementioned fix would make it into the RHEL 7.4 kernel, but it doesn't look like it has.
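
For anyone comparing the two, a minimal Ganesha export using the Ceph FSAL looks roughly like this (Export_Id, Path and Pseudo below are just placeholders, adjust for your environment):

    EXPORT {
        Export_Id = 1;
        Path = /;
        Pseudo = /cephfs;
        Access_Type = RW;
        FSAL {
            Name = CEPH;
        }
    }

Swapping Name = CEPH for Name = VFS (with Path pointing at the kernel mount) is essentially the other setup I described.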

I currently use async on my NFS exports, as writes are really poor otherwise. I'm comfortable with the risks that entails.
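
For illustration, an async export of the CephFS kernel mount would look something like this in /etc/exports (path and client range are placeholders):

    /mnt/cephfs    192.168.0.0/24(rw,async,no_subtree_check)

With sync, knfsd only acknowledges a write once it has hit stable storage; async acknowledges earlier, which is where both the speed and the risk come from.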

>> Interestingly, the paths reported in "replicating dir" are usually dirs
>> exported through Samba (generally Windows profile dirs). Samba runs
>> really well for us and there doesn't seem to be any impact on users. I
>> expect we wouldn't see these messages if running active/active MDS, but
>> I'm still a bit cautious about implementing that (am I being overly
>> cautious, I wonder?).
>
> From what I can see, it would have to be A/A/P (active/active plus a
> passive standby), since the MDS demands at least one standby.

That's news to me. Is it possible you still had standby config in your ceph.conf? 
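
For reference, on Luminous going multi-active should (if I remember right) just be a matter of:

    ceph fs set <fsname> allow_multimds true
    ceph fs set <fsname> max_mds 2

Daemons that don't get assigned a rank simply remain standbys, but I don't believe anything forces you to keep one around.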


> Regards,
> Jens


