ceph mount not working anymore

Is Ceph an acronym?  If yes, what?

John Tuite
Corporate Global Infrastructure Services Pittsburgh
Manager, Information Technology
Global Hosting Services
Thermo Fisher Scientific
600 Business Center Drive
Pittsburgh, Pennsylvania 15205
Office 412-490-7292
Mobile 412-897-3401
Fax 412-490-9401
john.tuite at thermofisher.com
http://www.thermofisher.com

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Joshua McClintock
Sent: Friday, July 11, 2014 1:44 PM
To: Alfredo Deza
Cc: ceph-users at lists.ceph.com
Subject: Re: ceph mount not working anymore

Hello Alfredo, isn't this what the 'ceph-release-1-0.el6.noarch' package is for in my rpm -qa list?  Here are the yum repo files I have in /etc/yum.repos.d.  I don't see any priorities in the ceph one, which is where libcephfs1 comes from, I think (a sketch of what those priority lines might look like follows the ceph.repo contents below).  I tried 'yum reinstall ceph-release', but the file still doesn't include any priority lines.


ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-firefly/el6/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
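
For comparison, a sketch of what the [Ceph] section could look like with a priority pin added by hand; the priority=1 line is illustrative (it is not shipped in ceph-release, and it only takes effect once the yum priorities plugin is installed):

[Ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/el6/$basearch
enabled=1
# lower number = higher priority; needs yum-plugin-priorities installed
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc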

Here is the one for apache
[apache2-ceph-noarch]
name=Apache noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-rpm-centos6-x86_64-basic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

[apache2-ceph-source]
name=Apache source packages for Ceph
baseurl=http://gitbuilder.ceph.com/apache2-rpm-centos6-x86_64-basic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

Here's the one for fastcgi
[fastcgi-ceph-basearch]
name=FastCGI basearch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos6-x86_64-basic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

[fastcgi-ceph-noarch]
name=FastCGI noarch packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos6-x86_64-basic/ref/master
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

[fastcgi-ceph-source]
name=FastCGI source packages for Ceph
baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos6-x86_64-basic/ref/master
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc

On Fri, Jul 11, 2014 at 5:44 AM, Alfredo Deza <alfredo.deza at inktank.com> wrote:
Joshua, it looks like you got Ceph from EPEL (that version has the '-2'
release slapped on it), and that is why you are seeing this
for ceph:

ceph-0.80.1-2.el6.x86_64

And this for others:

libcephfs1-0.80.1-0.el6.x86_64

Make sure that you do get Ceph from our repos. Newer versions of
ceph-deploy fix this by installing the yum priorities plugin
and making sure the ceph.repo file has a higher priority than EPEL.
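
A rough sketch of doing the same thing by hand on an el6 box; the sed edit and the downgrade step are illustrative assumptions, not steps Alfredo prescribes:

# make yum honor priority= lines in .repo files
yum install yum-plugin-priorities

# pin the ceph.com repos above EPEL (adds priority=1 after each baseurl line)
sed -i '/^baseurl=/a priority=1' /etc/yum.repos.d/ceph.repo

# re-resolve ceph so the ceph.com build wins over the EPEL '-2' build
yum clean all
yum downgrade ceph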



On Fri, Jul 11, 2014 at 1:31 AM, Joshua McClintock
<joshua at gravityedge.com> wrote:
> [root at chefwks01 ~]# ceph --cluster us-west01 osd crush dump
>
> { "devices": [
>
>         { "id": 0,
>
>           "name": "osd.0"},
>
>         { "id": 1,
>
>           "name": "osd.1"},
>
>         { "id": 2,
>
>           "name": "osd.2"},
>
>         { "id": 3,
>
>           "name": "osd.3"},
>
>         { "id": 4,
>
>           "name": "osd.4"}],
>
>   "types": [
>
>         { "type_id": 0,
>
>           "name": "osd"},
>
>         { "type_id": 1,
>
>           "name": "host"},
>
>         { "type_id": 2,
>
>           "name": "chassis"},
>
>         { "type_id": 3,
>
>           "name": "rack"},
>
>         { "type_id": 4,
>
>           "name": "row"},
>
>         { "type_id": 5,
>
>           "name": "pdu"},
>
>         { "type_id": 6,
>
>           "name": "pod"},
>
>         { "type_id": 7,
>
>           "name": "room"},
>
>         { "type_id": 8,
>
>           "name": "datacenter"},
>
>         { "type_id": 9,
>
>           "name": "region"},
>
>         { "type_id": 10,
>
>           "name": "root"}],
>
>   "buckets": [
>
>         { "id": -1,
>
>           "name": "default",
>
>           "type_id": 10,
>
>           "type_name": "root",
>
>           "weight": 147455,
>
>           "alg": "straw",
>
>           "hash": "rjenkins1",
>
>           "items": [
>
>                 { "id": -2,
>
>                   "weight": 29491,
>
>                   "pos": 0},
>
>                 { "id": -3,
>
>                   "weight": 29491,
>
>                   "pos": 1},
>
>                 { "id": -4,
>
>                   "weight": 29491,
>
>                   "pos": 2},
>
>                 { "id": -5,
>
>                   "weight": 29491,
>
>                   "pos": 3},
>
>                 { "id": -6,
>
>                   "weight": 29491,
>
>                   "pos": 4}]},
>
>         { "id": -2,
>
>           "name": "ceph-node20",
>
>           "type_id": 1,
>
>           "type_name": "host",
>
>           "weight": 29491,
>
>           "alg": "straw",
>
>           "hash": "rjenkins1",
>
>           "items": [
>
>                 { "id": 0,
>
>                   "weight": 29491,
>
>                   "pos": 0}]},
>
>         { "id": -3,
>
>           "name": "ceph-node22",
>
>           "type_id": 1,
>
>           "type_name": "host",
>
>           "weight": 29491,
>
>           "alg": "straw",
>
>           "hash": "rjenkins1",
>
>           "items": [
>
>                 { "id": 2,
>
>                   "weight": 29491,
>
>                   "pos": 0}]},
>
>         { "id": -4,
>
>           "name": "ceph-node24",
>
>           "type_id": 1,
>
>           "type_name": "host",
>
>           "weight": 29491,
>
>           "alg": "straw",
>
>           "hash": "rjenkins1",
>
>           "items": [
>
>                 { "id": 4,
>
>                   "weight": 29491,
>
>                   "pos": 0}]},
>
>         { "id": -5,
>
>           "name": "ceph-node21",
>
>           "type_id": 1,
>
>           "type_name": "host",
>
>           "weight": 29491,
>
>           "alg": "straw",
>
>           "hash": "rjenkins1",
>
>           "items": [
>
>                 { "id": 1,
>
>                   "weight": 29491,
>
>                   "pos": 0}]},
>
>         { "id": -6,
>
>           "name": "ceph-node23",
>
>           "type_id": 1,
>
>           "type_name": "host",
>
>           "weight": 29491,
>
>           "alg": "straw",
>
>           "hash": "rjenkins1",
>
>           "items": [
>
>                 { "id": 3,
>
>                   "weight": 29491,
>
>                   "pos": 0}]}],
>
>   "rules": [
>
>         { "rule_id": 0,
>
>           "rule_name": "replicated_ruleset",
>
>           "ruleset": 0,
>
>           "type": 1,
>
>           "min_size": 1,
>
>           "max_size": 10,
>
>           "steps": [
>
>                 { "op": "take",
>
>                   "item": -1,
>
>                   "item_name": "default"},
>
>                 { "op": "chooseleaf_firstn",
>
>                   "num": 0,
>
>                   "type": "host"},
>
>                 { "op": "emit"}]},
>
>         { "rule_id": 1,
>
>           "rule_name": "erasure-code",
>
>           "ruleset": 1,
>
>           "type": 3,
>
>           "min_size": 3,
>
>           "max_size": 20,
>
>           "steps": [
>
>                 { "op": "set_chooseleaf_tries",
>
>                   "num": 5},
>
>                 { "op": "take",
>
>                   "item": -1,
>
>                   "item_name": "default"},
>
>                 { "op": "chooseleaf_indep",
>
>                   "num": 0,
>
>                   "type": "host"},
>
>                 { "op": "emit"}]},
>
>         { "rule_id": 2,
>
>           "rule_name": "ecpool",
>
>           "ruleset": 2,
>
>           "type": 3,
>
>           "min_size": 3,
>
>           "max_size": 20,
>
>           "steps": [
>
>                 { "op": "set_chooseleaf_tries",
>
>                   "num": 5},
>
>                 { "op": "take",
>
>                   "item": -1,
>
>                   "item_name": "default"},
>
>                 { "op": "choose_indep",
>
>                   "num": 0,
>
>                   "type": "osd"},
>
>                 { "op": "emit"}]}],
>
>   "tunables": { "choose_local_tries": 0,
>
>       "choose_local_fallback_tries": 0,
>
>       "choose_total_tries": 50,
>
>       "chooseleaf_descend_once": 1,
>
>       "profile": "bobtail",
>
>       "optimal_tunables": 0,
>
>       "legacy_tunables": 0,
>
>       "require_feature_tunables": 1,
>
>       "require_feature_tunables2": 1}}
>
>
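A note on this dump: rules 1 and 2 ("erasure-code" and "ecpool") are type-3 erasure rules using the newer set_chooseleaf_tries / choose_indep steps, which is exactly the kind of CRUSH content that only CRUSH_V2-capable clients can decode; that lines up with the kernel-client feature mismatch quoted further down.  If (and only if) those erasure pools were disposable experiments, one possible way to drop the requirement would be roughly the following sketch; the pool name is guessed from the rule name, and none of this is advice given in the thread:

ceph --cluster us-west01 osd crush rule ls
ceph --cluster us-west01 osd pool delete ecpool ecpool --yes-i-really-really-mean-it
ceph --cluster us-west01 osd crush rule rm ecpool
ceph --cluster us-west01 osd crush rule rm erasure-code

The other option is to leave the pools alone and mount from a client whose kernel advertises CRUSH_V2.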
>
> On Thu, Jul 10, 2014 at 8:16 PM, Sage Weil <sweil at redhat.com> wrote:
>>
>> That is CEPH_FEATURE_CRUSH_V2.  Can you attach the output of
>>
>>  ceph osd crush dump
>>
>> Thanks!
>> sage
>>
>>
>> On Thu, 10 Jul 2014, Joshua McClintock wrote:
>>
>> > Yes, I changed some of the mount options on my osds (xfs mount options),
>> > but I think this may be the answer from dmesg; it sorta looks like a
>> > version mismatch:
>> >
>> > libceph: loaded (mon/osd proto 15/24)
>> > ceph: loaded (mds proto 32)
>> > libceph: mon0 192.168.0.14:6789 feature set mismatch, my 4a042aca < server's 104a042aca, missing 1000000000
>> > libceph: mon0 192.168.0.14:6789 socket error on read
>> > libceph: mon2 192.168.0.16:6789 feature set mismatch, my 4a042aca < server's 104a042aca, missing 1000000000
>> > libceph: mon2 192.168.0.16:6789 socket error on read
>> > libceph: mon1 192.168.0.15:6789 feature set mismatch, my 4a042aca < server's 104a042aca, missing 1000000000
>> > libceph: mon1 192.168.0.15:6789 socket error on read
>> > libceph: mon0 192.168.0.14:6789 feature set mismatch, my 4a042aca < server's 104a042aca, missing 1000000000
>> > libceph: mon0 192.168.0.14:6789 socket error on read
>> > libceph: mon2 192.168.0.16:6789 feature set mismatch, my 4a042aca < server's 104a042aca, missing 1000000000
>> > libceph: mon2 192.168.0.16:6789 socket error on read
>> > libceph: mon1 192.168.0.15:6789 feature set mismatch, my 4a042aca < server's 104a042aca, missing 1000000000
>> > libceph: mon1 192.168.0.15:6789 socket error on read
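
What "missing 1000000000" means is just bit arithmetic on the two feature masks above; a quick sketch in bash (the bit-to-name mapping comes from Sage's reply further up, not from this command):

# XOR the client and server feature masks printed by libceph
printf '0x%x\n' $(( 0x104a042aca ^ 0x4a042aca ))    # prints 0x1000000000, i.e. 1 << 36
# bit 36 is CEPH_FEATURE_CRUSH_V2: the kernel client on this host does not
# advertise it, so the monitors drop the connection ('socket error on read')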
>> >
>> >
>> > Maybe I didn't update as thoroughly as I thought I did.  I did hit every
>> > mon, but I remember I couldn't upgrade to the new 'ceph' package because
>> > it conflicted with 'python-ceph', so I uninstalled python-ceph and then
>> > upgraded to .80.1-2.   Maybe there's a subcomponent I missed?
>> >
>> >
>> > Here's rpm -qa from the client:
>> >
>> > [root at chefwks01 ~]# rpm -qa|grep ceph
>> > ceph-deploy-1.5.2-0.noarch
>> > ceph-release-1-0.el6.noarch
>> > ceph-0.80.1-2.el6.x86_64
>> > libcephfs1-0.80.1-0.el6.x86_64
>> >
>> > Here's rpm -qa from the mons:
>> >
>> > [root at ceph-mon01 ~]# rpm -qa|grep ceph
>> > ceph-0.80.1-2.el6.x86_64
>> > ceph-release-1-0.el6.noarch
>> > libcephfs1-0.80.1-0.el6.x86_64
>> > [root at ceph-mon01 ~]#
>> >
>> > [root at ceph-mon02 ~]# rpm -qa|grep ceph
>> > libcephfs1-0.80.1-0.el6.x86_64
>> > ceph-0.80.1-2.el6.x86_64
>> > ceph-release-1-0.el6.noarch
>> > [root at ceph-mon02 ~]#
>> >
>> > [root at ceph-mon03 ~]# rpm -qa|grep ceph
>> > libcephfs1-0.80.1-0.el6.x86_64
>> > ceph-0.80.1-2.el6.x86_64
>> > ceph-release-1-0.el6.noarch
>> > [root at ceph-mon03 ~]#
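
One way to check which repository an installed package actually came from (a sketch; the 'From repo' field is only recorded for packages installed through yum on that host):

yum info installed ceph | grep -i 'from repo'
rpm -qi ceph | grep -iE 'vendor|build host'    # EPEL builds stand out here, as does the -2 release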
>> >
>> >
>> > Joshua
>> >
>> >
>> >
>> > On Thu, Jul 10, 2014 at 6:04 PM, Sage Weil <sweil at redhat.com> wrote:
>> >       Have you made any other changes after the upgrade?  (Like
>> >       adjusting
>> >       tunables, or creating EC pools?)
>> >
>> >       See if there is anything in 'dmesg' output.
>> >
>> >       sage
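
A sketch of how one might answer both of Sage's questions from the admin host (assuming the same --cluster name used elsewhere in the thread):

ceph --cluster us-west01 osd crush show-tunables    # which tunables profile is in effect
ceph --cluster us-west01 osd dump | grep erasure    # were any erasure-coded pools created?
dmesg | tail -50                                    # kernel-client errors from the failed mount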
>> >
>> >       On Thu, 10 Jul 2014, Joshua McClintock wrote:
>> >
>> >       > I upgraded my cluster to .80.1-2 (CentOS).  My mount command
>> >       just freezes
>> >       > and outputs an error:
>> >       >
>> >       > mount.ceph 192.168.0.14,192.168.0.15,192.168.0.16:/ /us-west01 -o name=chefwks01,secret=`ceph-authtool -p -n client.admin /etc/ceph/us-west01.client.admin.keyring`
>> >       >
>> >       > mount error 5 = Input/output error
>> >       >
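Side note on the mount invocation above: mount.ceph also accepts a secretfile= option, which keeps the key off the command line; a sketch (the secret-file path is invented for illustration, and the file would contain only the base64 key that ceph-authtool -p prints):

mount -t ceph 192.168.0.14,192.168.0.15,192.168.0.16:/ /us-west01 \
      -o name=chefwks01,secretfile=/etc/ceph/us-west01.client.admin.secret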
>> >       >
>> >       > Here's the output from 'ceph -s'
>> >       >
>> >       >
>> >       >     cluster xxxxxxxxxxxxxxxxxxxxxx
>> >       >
>> >       >      health HEALTH_OK
>> >       >
>> >       >      monmap e1: 3 mons at {ceph-mon01=192.168.0.14:6789/0,ceph-mon02=192.168.0.15:6789/0,ceph-mon03=192.168.0.16:6789/0}, election epoch 88, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
>> >       >
>> >       >      mdsmap e26: 1/1/1 up {0=0=up:active}
>> >       >
>> >       >      osdmap e1371: 5 osds: 5 up, 5 in
>> >       >
>> >       >       pgmap v49431: 192 pgs, 3 pools, 135 GB data, 34733 objects
>> >       >
>> >       >             406 GB used, 1874 GB / 2281 GB avail
>> >       >
>> >       >                  192 active+clean
>> >       >
>> >       >
>> >       > I can see some packets being exchanged between the client and
>> >       the mon, but
>> >       > it's a pretty short exchange.
>> >       >
>> >       > Any ideas where to look next?
>> >       >
>> >       > Joshua
>> >       >
>> >       >
>> >       >
>> >
>> >
>> >
>> >
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


