Re: "mount error 5 = Input/output error" with the CephFS file system from client node

On Tue, Jun 14, 2016 at 4:29 AM, Rakesh Parkiti
<rakeshparkiti@xxxxxxxxxxx> wrote:
> Hello,
>
> I am unable to mount the CephFS file system from the client node; it fails
> with "mount error 5 = Input/output error".
> The MDS was installed on a separate node. The Ceph cluster health is OK and
> the MDS service is running. The firewall was disabled on all nodes in the
> cluster.
>
> -- Ceph Cluster Nodes (RHEL 7.2 version + Jewel version 10.2.1)
> -- Client Nodes - Ubuntu 14.04 LTS
>
> Admin Node:
> [root@Admin ceph]# ceph mds stat
> e34: 0/0/1 up
>
> Client Side:
> user@clientA2:/etc/ceph$ ceph fs ls --name client.admin
> name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
>
> user@clientA2:/etc/ceph$ sudo mount -t ceph 10.10.100.5:6789:/user
> /home/user/cephfs -o
> name=admin,secret=AQAQK1NXgupKIRAA9O7fKxadI/iIq/vPKLI9rw==
> mount error 5 = Input/output error
>
> The connection to the monitor node was established successfully:
> $tail -f /var/log/syslog
> Jun 14 16:32:24 clientA2 kernel: [82270.155030] libceph: client134154 fsid
> 66c5f31c-1756-47ce-889d-960e0d99f37a
> Jun 14 16:32:24 clientA2 kernel: [82270.156726] libceph: mon0
> 10.10.100.5:6789 session established
>
> I am able to check the Ceph health status from the client node with the
> client.admin keyring:
>
> user@clientA2:/etc/ceph$ ceph -s --name client.admin
>     cluster 66c5f31c-1756-47ce-889d-960e0d99f37a
>      health HEALTH_OK
>      monmap e6: 3 mons at
> {siteAmon=10.10.100.5:6789/0,siteBmon=10.10.150.6:6789/0,siteCmon=10.10.200.7:6789/0}
>             election epoch 70, quorum 0,1,2 siteAmon,siteBmon,siteCmon
>       fsmap e34: 0/0/1 up
>      osdmap e1097: 19 osds: 19 up, 19 in
>             flags sortbitwise
>       pgmap v25719: 1286 pgs, 5 pools, 92160 kB data, 9 objects
>             3998 MB used, 4704 GB / 4708 GB avail
>                 1286 active+clean

According to this ("fsmap e34: 0/0/1 up"), you don't have an active MDS in
your cluster. If the MDS daemon really is running, you'll need to figure out
why it isn't registering with the monitors.
-Greg
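
A minimal checklist for this situation, as a sketch only: it assumes the MDS
daemon id/hostname is "mdsnode", which is a placeholder, since the original
post does not name the MDS host (substitute your own id).

# On the MDS node (RHEL 7.2 with Jewel uses systemd units named ceph-mds@<id>;
# "mdsnode" below is a placeholder id):
[root@mdsnode ~]# systemctl status ceph-mds@mdsnode
[root@mdsnode ~]# systemctl start ceph-mds@mdsnode    # start it if it is down
[root@mdsnode ~]# tail -n 50 /var/log/ceph/ceph-mds.mdsnode.log   # look for auth/connection errors

# From the admin node, the MDS map should change from "0/0/1 up" to show an
# active daemon, e.g. something like "1/1/1 up {0=mdsnode=up:active}":
[root@Admin ceph]# ceph mds stat

Once an MDS reports up:active, the kernel client mount from the original post
should get past "mount error 5 = Input/output error".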
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


