Re: Write operation to cephFS mount hangs

Hi,

> I'm sending the logfile as an attachment. I can find no error messages or anything problematic…

I didn't see any log file attached to the email.

Another question: is there a correlation between the VMs that fail to write to CephFS and their hypervisors? Are all failing clients on the same hypervisor(s)? If so, have there been updates or any configuration changes there?

Regards
Eugen


Quoting Bödefeld Sabine <boedefeld@xxxxxxxxxxx>:

Hello Gregory



Yes I did that…

I started the ceph-fuse client manually with

ceph-fuse -d --debug-client=20 --debug-ms=1 --debug-monc=20 -m 192.168.1.161:6789 /mnt/cephfs

and then started the copy process. I'm sending the logfile as an attachment. I can find no error messages or anything problematic…
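
While a copy is stuck, the client's admin socket can show where it is waiting. A minimal sketch, assuming the default admin socket path on the client (the path and the available commands can differ between ceph-fuse versions):

ceph daemon /var/run/ceph/ceph-client.admin.asok mds_sessions   # is the session to the MDS still open?
ceph daemon /var/run/ceph/ceph-client.admin.asok mds_requests   # metadata requests still outstanding at the MDS

If mds_requests comes back empty while the write still hangs, the client is most likely waiting on the OSDs (the data path) rather than on the MDS.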



Kind regards

Sabine




Dr. sc. techn. ETH
Sabine Bödefeld
Senior Consultant

ECOSPEED AG
Drahtzugstrasse 18
CH-8008 Zürich
Tel. +41 44 388 95 04
Fax: +41 44 388 95 09
Web: http://www.ecospeed.ch




From: Gregory Farnum [mailto:gfarnum@xxxxxxxxxx]
Sent: Thursday, 2 August 2018 09:36
To: Bödefeld Sabine
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Write operation to cephFS mount hangs



Make sure you’ve gone through the suggestions at http://docs.ceph.com/docs/master/cephfs/troubleshooting/

On Thu, Aug 2, 2018 at 12:39 PM Bödefeld Sabine <boedefeld@xxxxxxxxxxx> wrote:

Hi Gregory



Yes, they have the same key and permissions. ceph auth list on the MDS server gives:



client.admin
        key: <key>
        caps: [mds] allow *
        caps: [mon] allow *
        caps: [osd] allow *



On the client, /etc/ceph/ceph.client.admin.keyring contains:

[client.admin]
        key = <key>
        caps mds = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"



I checked that the key is identical, and that the keyring is the same on the clients that work and on the clients where the write operations fail.
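
As an additional check (only a sketch; the pool name cephfs_data is an assumption, so verify the real data pool name first), one can try writing a test object directly into the CephFS data pool from a failing client, bypassing the filesystem layer:

ceph fs ls                                            # lists the metadata and data pools of the filesystem
rados -p cephfs_data put probe-object /etc/hostname   # should return immediately if the data path is healthy
rados -p cephfs_data rm probe-object                  # clean up the test object

If the rados put hangs as well, the problem sits between the failing clients and the OSDs (permissions, routing, MTU) rather than in CephFS itself.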

Do you have any other suggestions?
Kind regards

Sabine






Dr. sc. techn. ETH
Sabine Bödefeld
Senior Consultant

ECOSPEED AG
Drahtzugstrasse 18
CH-8008 Zürich
Tel. +41 44 388 95 04
Fax: +41 44 388 95 09
Web: http://www.ecospeed.ch




From: Gregory Farnum [mailto:gfarnum@xxxxxxxxxx]
Sent: Wednesday, 1 August 2018 06:10
To: Bödefeld Sabine
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Write operation to cephFS mount hangs





On Tue, Jul 31, 2018 at 7:46 PM Bödefeld Sabine <boedefeld@xxxxxxxxxxx> wrote:

Hello,



We have a Ceph cluster (10.2.10) on VMs running Ubuntu 16.04, with Xen as the hypervisor. We use CephFS, and the clients access the files via ceph-fuse.

Some of the ceph-fuse clients hang on write operations to CephFS: when copying a file to CephFS, the file is created but stays empty, and the write operation hangs forever. The ceph-fuse version is 10.2.9.



Sounds like the client has the MDS permissions required to update the CephFS metadata hierarchy, but lacks permission to write to the RADOS pools which actually store the file data. What permissions do the clients have? Have you checked with "ceph auth list" or similar to make sure they all have the same CephX capabilities?
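
For illustration only (client.restricted and the pool name cephfs_data are hypothetical, not taken from this cluster): a keyring with caps shaped like the following would produce exactly this symptom, because the MDS creates the file's metadata while data writes to the pool are refused:

[client.restricted]
        key = <key>
        caps mds = "allow rw"
        caps mon = "allow r"
        caps osd = "allow r pool=cephfs_data"    # read-only on the data pool, so file writes never complete

A client whose caps are "allow *" across mds, mon and osd would not be affected by this particular failure mode.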

-Greg



There are no error messages in the MDS logfile. Also, ceph health returns HEALTH_OK.

ceph daemon mds.eco61 session ls reports no problems (if I interpret it correctly):

    {
        "id": 64396,
        "num_leases": 2,
        "num_caps": 32,
        "state": "open",
        "replay_requests": 0,
        "completed_requests": 1,
        "reconnecting": false,
        "inst": "client.64396 192.168.1.179:0\/980852091",
        "client_metadata": {
            "ceph_sha1": "2ee413f77150c0f375ff6f10edd6c8f9c7d060d0",
            "ceph_version": "ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)",
            "entity_id": "admin",
            "hostname": "eco79",
            "mount_point": "\/mnt\/cephfs",
            "root": "\/"
        }
    },
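
Beyond session ls, the MDS admin socket can also show whether anything is stuck on the server side. A small sketch (command availability may vary by release; mds.eco61 as above):

ceph daemon mds.eco61 dump_ops_in_flight   # metadata operations the MDS is still processing
ceph daemon mds.eco61 objecter_requests    # RADOS requests the MDS itself is waiting on

If both are empty while a client write hangs, the stall is more likely on the client-to-OSD data path than in the MDS.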



Does anyone have an idea where the problem lies? Any help would be greatly appreciated.

Thanks very much,

Kind regards

Sabine

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






