Re: mount failed since failed to load ceph kernel module

On Tue, Nov 14, 2017 at 11:12:47AM +0100, Iban Cabrillo wrote:
> Hi,
>    You should do something like: #ceph osd in osd.${num}
>    But if this is your tree, I do not see any OSD available in your cluster
> at this moment; it should look something like this example:
> 
> ID CLASS WEIGHT   TYPE NAME                STATUS REWEIGHT PRI-AFF
> -1       58.21509 root default
> 
> -2       29.12000         host cephosd01
>  1   hdd  3.64000             osd.1            up  1.00000 1.00000
> ......
> -3       29.09509         host cephosd02
>  0   hdd  3.63689             osd.0            up  1.00000 1.00000
> ......
> 
> Please have a look at the guide:
> http://docs.ceph.com/docs/luminous/rados/deployment/ceph-deploy-osd/
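
A minimal sketch of the commands being suggested here (the OSD id, device and host names are only examples, and the exact ceph-deploy syntax depends on its version):

    # show the CRUSH tree with the up/in status of every OSD
    ceph osd tree
    # if an OSD exists but is marked out, bring it back in
    ceph osd in osd.1
    # if no OSDs exist yet, create one per the guide above, e.g. with ceph-deploy
    ceph-deploy osd create --data /dev/sdb cephosd01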


In fact I install Ceph in Docker. Since Docker does not support creating
partitions at runtime, I use `parted` to create the partition first, then
start the container and run the Ceph create step again. The debug log looks
fine; is there anywhere else I can get more detailed information?
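
For reference, a minimal sketch of that workflow, assuming an example data disk /dev/sdb and the OSD host 172.17.0.4 from the logs above (the partition is made on the host because the container cannot create it at runtime):

    # on the host, before starting the container: label the disk and carve a partition
    parted --script /dev/sdb mklabel gpt
    parted --script /dev/sdb mkpart primary 0% 100%

    # then, from the deploy node/container, create the OSD on that partition
    # (ceph-deploy >= 2.0 syntax; 1.5.x uses "osd prepare" followed by "osd activate")
    ceph-deploy osd create --data /dev/sdb1 172.17.0.4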
> 
> 
> Regards, I
> 
> 2017-11-14 10:58 GMT+01:00 Dai Xiang <xiang.dai@xxxxxxxxxxx>:
> 
> > On Tue, Nov 14, 2017 at 10:52:00AM +0100, Iban Cabrillo wrote:
> > > Hi Dai Xiang,
> > >   There is no OSD available in your cluster at this moment, so you can't
> > > read/write or mount anything. Maybe the OSDs are configured but they are
> > > out; could you please paste the output of the "#ceph osd tree" command
> > > so we can see your OSD status?
> >
> > ID CLASS WEIGHT TYPE NAME    STATUS REWEIGHT PRI-AFF
> > -1            0 root default
> >
> > It is out indeed, but I really do not know how to fix it.
> >
> > --
> > Best Regards
> > Dai Xiang
> > >
> > > Regards, I
> > >
> > >
> > > 2017-11-14 10:39 GMT+01:00 Dai Xiang <xiang.dai@xxxxxxxxxxx>:
> > >
> > > > On Tue, Nov 14, 2017 at 09:21:56AM +0000, Linh Vu wrote:
> > > > > Odd, you only got 2 mons and 0 osds? Your cluster build looks incomplete.
> > > >
> > > > But from the log, the OSDs seem normal:
> > > > [172.17.0.4][INFO  ] checking OSD status...
> > > > [172.17.0.4][DEBUG ] find the location of an executable
> > > > [172.17.0.4][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
> > > > [ceph_deploy.osd][DEBUG ] Host 172.17.0.4 is now ready for osd use.
> > > > ...
> > > >
> > > > [172.17.0.5][INFO  ] Running command: systemctl enable ceph.target
> > > > [172.17.0.5][INFO  ] checking OSD status...
> > > > [172.17.0.5][DEBUG ] find the location of an executable
> > > > [172.17.0.5][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
> > > > [ceph_deploy.osd][DEBUG ] Host 172.17.0.5 is now ready for osd use.
> > > >
> > > > --
> > > > Best Regards
> > > > Dai Xiang
> > > > >
> > > > >
> > > > > ________________________________
> > > > > From: Dai Xiang <xiang.dai@xxxxxxxxxxx>
> > > > > Sent: Tuesday, November 14, 2017 6:12:27 PM
> > > > > To: Linh Vu
> > > > > Cc: ceph-users@xxxxxxxxxxxxxx
> > > > > Subject: Re: mount failed since failed to load ceph kernel module
> > > > >
> > > > > On Tue, Nov 14, 2017 at 02:24:06AM +0000, Linh Vu wrote:
> > > > > > Your kernel is way too old for CephFS Luminous. I'd use one of the newer kernels from http://elrepo.org. :) We're on 4.12 here on RHEL 7.4.
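
For reference, a rough sketch of installing a mainline kernel from ELRepo on CentOS/RHEL 7 (the release RPM version and boot-entry index are assumptions; check elrepo.org for the current instructions):

    # enable the ELRepo repository
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
    # install the mainline kernel from the elrepo-kernel repo
    yum --enablerepo=elrepo-kernel install kernel-ml
    # boot the new kernel by default (entry 0 is typically the newest) and reboot
    grub2-set-default 0 && reboot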
> > > > >
> > > > > I have updated the kernel to the newest version:
> > > > > [root@d32f3a7b6eb8 ~]$ uname -a
> > > > > Linux d32f3a7b6eb8 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
> > > > > [root@d32f3a7b6eb8 ~]$ cat /etc/redhat-release
> > > > > CentOS Linux release 7.2.1511 (Core)
> > > > >
> > > > > But still failed:
> > > > > [root@d32f3a7b6eb8 ~]$ /bin/mount 172.17.0.4,172.17.0.5:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > > > failed to load ceph kernel module (1)
> > > > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > > > mount error 2 = No such file or directory
> > > > > [root@d32f3a7b6eb8 ~]$ ll /cephfs
> > > > > total 0
> > > > >
> > > > > [root@d32f3a7b6eb8 ~]$ ceph -s
> > > > >   cluster:
> > > > >     id:     a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
> > > > >     health: HEALTH_WARN
> > > > >             Reduced data availability: 128 pgs inactive
> > > > >             Degraded data redundancy: 128 pgs unclean
> > > > >
> > > > >   services:
> > > > >     mon: 2 daemons, quorum d32f3a7b6eb8,1d22f2d81028
> > > > >     mgr: d32f3a7b6eb8(active), standbys: 1d22f2d81028
> > > > >     mds: cephfs-1/1/1 up  {0=1d22f2d81028=up:creating}, 1 up:standby
> > > > >     osd: 0 osds: 0 up, 0 in
> > > > >
> > > > >   data:
> > > > >     pools:   2 pools, 128 pgs
> > > > >     objects: 0 objects, 0 bytes
> > > > >     usage:   0 kB used, 0 kB / 0 kB avail
> > > > >     pgs:     100.000% pgs unknown
> > > > >              128 unknown
> > > > >
> > > > > [root@d32f3a7b6eb8 ~]$ lsmod | grep ceph
> > > > > ceph                  372736  0
> > > > > libceph               315392  1 ceph
> > > > > fscache                65536  3 ceph,nfsv4,nfs
> > > > > libcrc32c              16384  5 libceph,nf_conntrack,xfs,dm_persistent_data,nf_nat
> > > > >
> > > > >
> > > > > --
> > > > > Best Regards
> > > > > Dai Xiang
> > > > > >
> > > > > >
> > > > > > Hi!
> > > > > >
> > > > > > I ran into a confusing issue in Docker, as below:
> > > > > >
> > > > > > After installing Ceph successfully, I tried to mount CephFS but it failed:
> > > > > >
> > > > > > [root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o name=admin,secretfile=/etc/ceph/admin.secret -v
> > > > > > failed to load ceph kernel module (1)
> > > > > > parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret
> > > > > > mount error 5 = Input/output error
> > > > > >
> > > > > > But the Ceph-related kernel modules do exist:
> > > > > >
> > > > > > [root@dbffa72704e4 ~]$ lsmod | grep ceph
> > > > > > ceph                  327687  0
> > > > > > libceph               287066  1 ceph
> > > > > > dns_resolver           13140  2 nfsv4,libceph
> > > > > > libcrc32c              12644  3 xfs,libceph,dm_persistent_data
> > > > > >
> > > > > > Check the Ceph state (I only set a data disk for the OSD):
> > > > > >
> > > > > > [root@dbffa72704e4 ~]$ ceph -s
> > > > > >   cluster:
> > > > > >     id:     20f51975-303e-446f-903f-04e1feaff7d0
> > > > > >     health: HEALTH_WARN
> > > > > >             Reduced data availability: 128 pgs inactive
> > > > > >             Degraded data redundancy: 128 pgs unclean
> > > > > >
> > > > > >   services:
> > > > > >     mon: 2 daemons, quorum dbffa72704e4,5807d12f920e
> > > > > >     mgr: dbffa72704e4(active), standbys: 5807d12f920e
> > > > > >     mds: cephfs-1/1/1 up  {0=5807d12f920e=up:creating}, 1 up:standby
> > > > > >     osd: 0 osds: 0 up, 0 in
> > > > > >
> > > > > >   data:
> > > > > >     pools:   2 pools, 128 pgs
> > > > > >     objects: 0 objects, 0 bytes
> > > > > >     usage:   0 kB used, 0 kB / 0 kB avail
> > > > > >     pgs:     100.000% pgs unknown
> > > > > >              128 unknown
> > > > > >
> > > > > > [root@dbffa72704e4 ~]$ ceph version
> > > > > > ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e)
> > > > luminous (stable)
> > > > > >
> > > > > > My container is based on centos:centos7.2.1511, kernel is 3e0728877e22 3.10.0-514.el7.x86_64.
> > > > > >
> > > > > > I saw some Ceph-related images on Docker Hub, so I think the above
> > > > > > operations should be fine. Did I miss something important?
> > > > > >
> > > > > > --
> > > > > > Best Regards
> > > > > > Dai Xiang
> > > > >
> > > >
> > > > _______________________________________________
> > > > ceph-users mailing list
> > > > ceph-users@xxxxxxxxxxxxxx
> > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > >
> > >
> > >
> > >
> > > --
> > > ############################################################################
> > > Iban Cabrillo Bartolome
> > > Instituto de Fisica de Cantabria (IFCA)
> > > Santander, Spain
> > > Tel: +34942200969
> > > PGP PUBLIC KEY:
> > > http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
> > > ############################################################################
> > > Bertrand Russell: "The trouble with the world is that the stupid are
> > > cocksure and the intelligent are full of doubt."
> >
> >
> 
> 
> -- 
> ############################################################################
> Iban Cabrillo Bartolome
> Instituto de Fisica de Cantabria (IFCA)
> Santander, Spain
> Tel: +34942200969
> PGP PUBLIC KEY:
> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
> ############################################################################
> Bertrand Russell: "The trouble with the world is that the stupid are
> cocksure and the intelligent are full of doubt."

-- 
Best Regards
Dai Xiang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



