Re: "failed to open ino"

Hi David,

Quoting David C <dcsysengineer@xxxxxxxxx>:
On 27 Nov 2017 1:06 p.m., "Jens-U. Mozdzen" <jmozdzen@xxxxxx> wrote:

Hi David,

Quoting David C <dcsysengineer@xxxxxxxxx>:

Hi Jens

We also see these messages quite frequently, mainly the "replicating
dir..." one. We've only seen "failed to open ino" a few times, so we didn't
do any real investigation. Our setup is very similar to yours: 12.2.1,
active/standby MDS and exporting CephFS through knfsd (hoping to replace it
with Ganesha soon).


Been there, done that: using Ganesha more than doubled the run-time of our
jobs, while with knfsd the run-time is about the same for CephFS-based and
"local disk"-based files. But YMMV, so if you see speeds with Ganesha that
are similar to knfsd, please report back with details...


I'd be interested to know whether you tested Ganesha over a CephFS kernel
mount (i.e. using the VFS FSAL) or whether you used the Ceph FSAL, and also
which server and client versions you tested.

I had tested Ganesha only via the Ceph FSAL. Our Ceph nodes (including the one used as a Ganesha server) are running ceph-12.2.1+git.1507910930.aea79b8b7a on OpenSUSE 42.3, SUSE's kernel 4.4.76-1-default (which has a number of back-ports in it), Ganesha is at version nfs-ganesha-2.5.2.0+git.1504275777.a9d23b98f.

The NFS clients are a broad mix of current and older systems.
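In case it's useful for comparing results, a minimal Ganesha export block for the Ceph FSAL looks roughly like the sketch below (export ID, pseudo path and squash setting are illustrative placeholders, not our exact configuration); swapping the FSAL name to VFS and pointing Path at a kernel-mounted CephFS directory gives the VFS variant:

    EXPORT
    {
        Export_Id = 1;               # placeholder export ID
        Path = /;                    # CephFS path (Ceph FSAL) or kernel mount point (VFS FSAL)
        Pseudo = /cephfs;            # NFSv4 pseudo path, placeholder
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;             # use "VFS" instead to export a kernel-mounted CephFS
        }
    }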

Prior to Luminous, Ganesha writes were terrible due to a bug with fsync
calls in the MDS code. The fix went into both the MDS and the client code. If
you're doing Ganesha on top of the kernel mount, you'll need a pretty recent
kernel to see the write improvements.

As we were testing the Ceph FSAL, this should not be the cause.

From my limited Ganesha testing so far, reads are better when exporting the
kernel mount, while writes are much better with the Ceph FSAL. But that's
expected in my case, as I'm using the CentOS kernel. I was hoping the
aforementioned fix would make it into the RHEL 7.4 kernel, but it doesn't
look like it has.

When exporting the kernel-mounted CephFS via kernel nfsd, we see similar speeds to serving the same set of files from a local bcache'd RAID1 array on SAS disks. This is for a mix of reads and writes, mostly small files (compile jobs, some packaging).
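For comparison, that knfsd setup is essentially of the following shape (monitor address, mount point, credentials and export options are placeholders, not our exact values):

    # kernel-mount CephFS
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # /etc/exports entry for knfsd; an explicit fsid is typically needed since
    # CephFS has no local block device for nfsd to derive one from
    /mnt/cephfs  *(rw,no_root_squash,sync,fsid=100)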

From what I can see, it would have to be A/A/P, since MDS demands at least
one standby.


That's news to me.

From http://docs.ceph.com/docs/master/cephfs/multimds/ :

"Each CephFS filesystem has a max_mds setting, which controls how many ranks will be created. The actual number of ranks in the filesystem will only be increased if a spare daemon is available to take on the new rank. For example, if there is only one MDS daemon running, and max_mds is set to two, no second rank will be created."

It might well be that I was misreading this... I had first read it to mean that a spare daemon needs to be available *while running* A/A, but the example sounds like the spare is required when *switching to* A/A.
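If that reading is right, then with a standby daemon already running, switching to A/A should boil down to something like this (the filesystem name "cephfs" is a placeholder):

    # on Luminous, multi-MDS may first need to be allowed for the filesystem
    ceph fs set cephfs allow_multimds true

    # promote the standby to a second active rank
    ceph fs set cephfs max_mds 2

    # verify the resulting ranks / standbys
    ceph fs status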

Is it possible you still had standby config in your ceph.conf?

I'm not sure what you're asking about: is this related to active/active or to our Ganesha tests? We have not yet tried to switch to A/A, so our config does indeed still contain standby parameters.
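Concretely, they are along these lines (daemon names are placeholders, not our actual ones):

    # Luminous-era standby settings, set per MDS daemon
    [mds.b]
        mds_standby_for_name = a
        mds_standby_replay = true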

Regards,
Jens

--
Jens-U. Mozdzen                         voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15                       mobile  : +49-179-4 98 21 98
D-22423 Hamburg                         e-mail  : jmozdzen@xxxxxx

        Chairwoman of the Supervisory Board: Angelika Torlée-Mozdzen
          Registered office and court of registration: Hamburg, HRB 90934
                  Executive Board: Jens-U. Mozdzen
                   VAT ID no. DE 814 013 983

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



