Re: cephfs not writeable on a few clients


 



The working client is running in user space (probably ceph-fuse), while the non-working client is using a kernel mount. You can tell from the session metadata below: the kernel client reports kernel_version, whereas the ceph-fuse client reports ceph_version, ceph_sha1 and mount_point.
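To confirm that the mount type, rather than the cluster, is the issue, one thing to try is mounting with ceph-fuse on one of the affected hosts and checking whether writes work there. A minimal sketch, assuming the mount point /cephfs01 and the monitor address from the outputs below; the keyring/secret paths are placeholders:

    # kernel mount (what the non-working clients appear to use)
    mount -t ceph 172.22.12.241:6789:/ /cephfs01 -o name=cephfs,secretfile=/etc/ceph/client.cephfs.secret

    # userspace mount via ceph-fuse (what the working client appears to use)
    ceph-fuse -n client.cephfs -k /etc/ceph/ceph.client.cephfs.keyring -m 172.22.12.241:6789 /cephfs01

If ceph-fuse can write where the kernel mount cannot, the problem is on the kernel client side rather than in the cluster.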


From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Steininger, Herbert
Sent: 25 April 2017 16:44
To: ceph-users@xxxxxxxxxxxxxx
Subject: cephfs not writeable on a few clients

Hi,

I’m fairly new to CephFS; at my new job there is a CephFS cluster that I have to administer.

The problem is that I can’t write to the cephfs mount from some clients.
When I try from the affected clients, the log file shows:
>Apr 24 13:14:00 cuda002 kernel: ceph: mds0 hung
>Apr 24 13:14:00 cuda002 kernel: ceph: mds0 caps stale
>Apr 24 13:14:00 cuda002 kernel: ceph: mds0 came back
>Apr 24 13:14:00 cuda002 kernel: ceph: mds0 caps still stale

Restarting the mds doesn’t make any difference.

> ceph -s
says:
[root@cuda001:/var/log/ceph]# ceph -s
    cluster cde1487e-f930-417a-9403-28e9ebf406b8
     health HEALTH_OK
     monmap e6: 1 mons at {cephcontrol=172.22.12.241:6789/0}
            election epoch 1, quorum 0 cephcontrol
     mdsmap e1574: 1/1/1 up {0=A1214-2950-01=up:active}
     osdmap e9571: 6 osds: 6 up, 6 in
      pgmap v11438317: 320 pgs, 3 pools, 20427 GB data, 7102 kobjects
            62100 GB used, 52968 GB / 112 TB avail
                 319 active+clean
                   1 active+clean+scrubbing+deep


So everything should be right, but it is not working.

The only thing I found that is different from the other hosts is the output of:
> ceph daemon mds.A1214-2950-01 session ls

On the working clients I get:
    {
        "id": 670317,
        "num_leases": 0,
        "num_caps": 35386,
        "state": "open",
        "replay_requests": 0,
        "reconnecting": false,
        "inst": "client.670317 172.22.7.52:0\/4290071627",
        "client_metadata": {
            "ceph_sha1": "mySHA1-ID",
            "ceph_version": "ceph version 0.94.9 (mySHA1-ID)",
            "entity_id": "admin",
            "hostname": "PE8",
            "mount_point": "\/cephfs01"
        }


On the non-working clients it looks like:
    {
        "id": 670648,
        "num_leases": 0,
        "num_caps": 60,
        "state": "open",
        "replay_requests": 0,
        "reconnecting": false,
        "inst": "client.670648 172.22.20.5:0\/2770536198",
        "client_metadata": {
            "entity_id": "cephfs",
            "hostname": "slurmgate",
            "kernel_version": "3.10.0-514.16.1.el7.x86_64"
        }

The biggest difference is that there are no ‘ceph_sha1’ or ‘ceph_version’ entries, no ‘mount_point’, and the ‘entity_id’ is also different.

Could someone please shed some light on what I did wrong?
The guy who installed it is no longer here, and there is no documentation either.
I just mount it via automount/autofs.
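For reference, the kind of autofs map entry I mean is along these lines; the monitor address is the one from the cluster above, while the client name and secret path are placeholders:

    # /etc/auto.master:  /-  /etc/auto.cephfs
    # /etc/auto.cephfs:
    /cephfs01  -fstype=ceph,name=cephfs,secretfile=/etc/ceph/client.cephfs.secret  172.22.12.241:6789:/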

If you need more info, just let me know.

Thanks in Advance,
Steininger Herbert

-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





