Re: Write operation to cephFS mount hangs

Make sure you’ve gone through the suggestions at http://docs.ceph.com/docs/master/cephfs/troubleshooting/
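
In particular, that page walks through checking for stuck requests on both the client and the MDS. A minimal sketch of those checks (the admin socket path is an assumption and depends on the client name and PID; the MDS name eco61 is taken from the session output further down):

    # On a hanging ceph-fuse client: list RADOS requests stuck in the objecter
    ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests
    # ...and MDS requests that never complete
    ceph daemon /var/run/ceph/ceph-client.admin.asok mds_requests

    # On the MDS host: list operations stuck in flight
    ceph daemon mds.eco61 dump_ops_in_flight
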
On Thu, Aug 2, 2018 at 12:39 PM Bödefeld Sabine <boedefeld@xxxxxxxxxxx> wrote:

Hi Gregory,

 

Yes, they have the same key and permissions. ceph auth list on the MDS server gives:

 

client.admin

        key: <key>

        caps: [mds] allow *

        caps: [mon] allow *

        caps: [osd] allow *

 

On the client, /etc/ceph/ceph.client.admin.keyring contains:

[client.admin]

        key = <key>

        caps mds = "allow *"

        caps mon = "allow *"

        caps osd = "allow *"

 

I checked that the key is identical and that the keyring is the same on the clients that work and on the clients where the write operations fail.
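
One additional check that can separate an MDS problem from an OSD problem is a direct RADOS write from one of the failing clients, bypassing CephFS entirely (the pool name cephfs_data is an assumption; the actual data pool can be listed with "ceph fs ls"):

    # Run on a client where writes hang, using the same client.admin keyring
    rados --id admin -p cephfs_data put probe-object /etc/hostname
    rados --id admin -p cephfs_data stat probe-object
    rados --id admin -p cephfs_data rm probe-object

If the rados put also hangs, the problem sits between the client and the OSDs (network or OSD caps) rather than in the MDS or the CephFS metadata path.
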

Do you have any other suggestions?
Kind regards

Sabine

 


Dr. sc. techn. ETH
Sabine Bödefeld
Senior Consultant


ECOSPEED AG
Drahtzugstrasse 18
CH-8008 Zürich
Tel. +41 44 388 95 04
Fax: +41 44 388 95 09
Web: http://www.ecospeed.ch


 

From: Gregory Farnum [mailto:gfarnum@xxxxxxxxxx]
Sent: Wednesday, August 1, 2018 06:10
To: Bödefeld Sabine
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Write operation to cephFS mount hangs

 

 

On Tue, Jul 31, 2018 at 7:46 PM Bödefeld Sabine <boedefeld@xxxxxxxxxxx> wrote:

Hello,

 

we have a Ceph cluster (10.2.10) running on Ubuntu 16.04 VMs with Xen as the hypervisor. We use CephFS, and the clients access the files via ceph-fuse.

Some of the ceph-fuse clients hang on write operations to the CephFS mount. When copying a file to the CephFS, the file is created but stays empty, and the write operation hangs forever. The ceph-fuse version is 10.2.9.

 

Sounds like the client has the MDS permissions required to update the CephFS metadata hierarchy, but lacks permission to write to the RADOS pools which actually store the file data. What permissions do the clients have? Have you checked with "ceph auth list" or similar to make sure they all have the same CephX capabilities?

-Greg
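
A sketch of how such caps can be inspected and adjusted with cephx (client.foo and the pool name cephfs_data are placeholders, not taken from this thread):

    # Show the caps a specific client actually has
    ceph auth get client.admin

    # A client that can create files but not write their data would typically show
    # mds/mon caps but missing or read-only osd caps. Pool-restricted write access
    # can be granted with "ceph auth caps", for example:
    ceph auth caps client.foo mon 'allow r' mds 'allow' osd 'allow rw pool=cephfs_data'
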

 

There are no error messages in the MDS log file, and ceph health returns HEALTH_OK.

ceph daemon mds.eco61 session ls reports no problems (if I interpret it correctly):

    {

        "id": 64396,

        "num_leases": 2,

        "num_caps": 32,

        "state": "open",

        "replay_requests": 0,

        "completed_requests": 1,

        "reconnecting": false,

        "inst": "client.64396 192.168.1.179:0\/980852091",

        "client_metadata": {

            "ceph_sha1": "2ee413f77150c0f375ff6f10edd6c8f9c7d060d0",

            "ceph_version": "ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)",

            "entity_id": "admin",

            "hostname": "eco79",

            "mount_point": "\/mnt\/cephfs",

            "root": "\/"

        }

    },

 

Does anyone have an idea where the problem lies? Any help would be greatly appreciated.

Thanks very much,

Kind regards

Sabine



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
