Odd, you only got 2 mons and 0 osds? Your cluster build looks incomplete.
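With no OSDs up, the data and metadata pools can never reach active+clean and the MDS stays stuck in up:creating, so the mount cannot succeed regardless of the kernel. Two monitors is also an unusual choice, since quorum is lost if either one goes down; one or three is typical. A minimal sketch of bringing up one OSD with ceph-volume, assuming a spare data device on the OSD host (/dev/sdb here is only a placeholder):

    ceph-volume lvm create --data /dev/sdb   # device name is an assumption
    ceph osd tree                            # the new OSD should show as up and in
    ceph -s                                  # PGs should move from unknown to active+clean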
From: Dai Xiang <xiang.dai@xxxxxxxxxxx>
Sent: Tuesday, November 14, 2017 6:12:27 PM
To: Linh Vu
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: mount failed since failed to load ceph kernel module
On Tue, Nov 14, 2017 at 02:24:06AM +0000, Linh Vu wrote:
> Your kernel is way too old for CephFS Luminous. I'd use one of the newer kernels from http://elrepo.org. :) We're on 4.12 here on RHEL 7.4.
I have updated the kernel to the newest version:
[root@d32f3a7b6eb8 ~]$ uname -a
Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
But the mount still fails:
[root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
failed to load ceph kernel module (1)
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
mount error 2 = No such file or directory
[root@d32f3a7b6eb8 ~]$ ll /cephfs
total 0
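The "failed to load ceph kernel module (1)" line is expected inside a container: mount.ceph tries to modprobe, which cannot load modules from within Docker, so the module has to be loaded on the host beforehand. A quick check, assuming shell access on the Docker host (not the container):

    # run on the Docker host
    modprobe ceph
    lsmod | grep ceph

Since lsmod inside the container (shown below) already lists ceph and libceph, that warning is likely harmless here, and the "mount error 2" probably comes from the cluster state rather than the module.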
[root@d32f3a7b6eb8 ~]$ ceph -s
cluster:
id: a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
health: HEALTH_WARN
Reduced data availability: 128 pgs inactive
Degraded data redundancy: 128 pgs unclean
services:
mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
mds: cephfs-1/1/1 up {0=1d22f2d81028=up:creating}, 1 up:standby
osd: 0 osds: 0 up, 0 in
data:
pools: 2 pools, 128 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs: 100.000% pgs unknown
128 unknown
[root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
ceph 372736 0
libceph 315392 1 ceph
fscache 65536 3 ceph,nfsv4,nfs
libcrc32c 16384 5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
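Given that the module is loaded, the remaining blocker is most likely the MDS never leaving up:creating while there are no OSDs to serve the metadata pool. A few checks worth running before retrying the mount:

    ceph health detail   # shows which PGs are inactive and why
    ceph mds stat        # should report up:active once PGs are clean
    ceph fs ls           # confirms the filesystem and its pools exist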
--
Best Regards
Dai Xiang
>
>
> Hi!
>
> I ran into a confusing issue in Docker, described below:
>
> After installing Ceph successfully, I tried to mount CephFS but it failed:
>
> [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> failed to load ceph kernel module (1)
> parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> mount error 5 = Input/output error
>
> But the Ceph-related kernel modules are present:
>
> [root@dbffa72704e4 ~]$ lsmod | grep ceph
> ceph 327687 0
> libceph 287066 1 ceph
> dns_resolver 13140 2 nfsv4,libceph
> libcrc32c 12644 3 xfs,libceph,dm_persistent_data
>
> Checking the Ceph state (I only set a data disk for the OSD):
>
> [root@dbffa72704e4 ~]$ ceph -s
> cluster:
> id: 20f51975-303e-446f-903f-04e1feaff7d0
> health: HEALTH_WARN
> Reduced data availability: 128 pgs inactive
> Degraded data redundancy: 128 pgs unclean
>
> services:
> mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> mgr: dbffa72704e4(active), standbys: 5807d12f920e
> mds: cephfs-1/1/1 up {0=5807d12f920e=up:creating}, 1 up:standby
> osd: 0 osds: 0 up, 0 in
>
> data:
> pools: 2 pools, 128 pgs
> objects: 0 objects, 0 bytes
> usage: 0 kB used, 0 kB / 0 kB avail
> pgs: 100.000% pgs unknown
> 128 unknown
>
> [root@dbffa72704e4 ~]$ ceph version
> ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
>
> My container is based on centos:centos7.2.1511; the kernel (on 3e0728877e22) is 3.10.0-514.el7.x86_64.
>
> I saw some Ceph-related images on Docker Hub, so I assume the above
> operation should work. Did I miss something important?
>
> --
> Best Regards
> Dai Xiang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com