Re: Ceph Command Prepending "None" to output on one node (only)

I've (re)confirmed that all nodes are the same build.

# ceph --version
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)

Ubuntu package version: 0.72.2-1precise
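Since the ceph CLI in this release is a Python script, a stray "None" on stdout looks a lot like an old copy of the tool printing the None return value of some call, so I also want to rule out a stale binary shadowing the packaged one somewhere on $PATH. Something like this should catch both a version mismatch and a leftover copy (rough one-liner, assumes root ssh between the nodes; hostnames are from mon_initial_members in my config below):

# for h in fs1 os1 cortex os2; do ssh $h 'hostname; ceph --version; which -a ceph'; done

If "which -a" turns up more than one ceph on the odd node (say, a leftover /usr/local/bin/ceph from an old source install), that would explain one node misbehaving while the package versions all match.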

I was discussing this with my engineers this morning, and a couple of them vaguely recalled that we had run into this on an earlier version of ceph during testing, but no one could remember the circumstances or the resolution. In fact, they thought I had fixed it. :)

Since this is my home sandbox cluster I can easily rebuild that node if need be, but I wanted to see if anyone could point me toward a better solution so I don't run into this again. 
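
For reference, the pre-decode band-aid I mention below for _get_mon_addrs() would just drop any leading junk (the stray "None" line) before handing the output to the JSON decoder. A rough, untested sketch: the helper name is mine, and I'm assuming the method feeds the output of "ceph mon dump --format=json" to json.loads():

import json

def _strip_to_json(out):
    # Drop leading lines (e.g. a stray "None") until we hit something
    # that looks like the start of a JSON document.
    lines = out.splitlines()
    while lines and not lines[0].lstrip().startswith(('{', '[')):
        lines.pop(0)
    return '\n'.join(lines)

# What the broken node produces, roughly:
out = 'None\n{"epoch": 1, "mons": []}'
monmap = json.loads(_strip_to_json(out))

That should keep Cinder limping along without masking the underlying problem on the node.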

thanks. 

On Mon, Jan 6, 2014 at 10:07 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
I have a vague memory of this being something that happened in an
outdated version of the ceph tool. Are you running an older binary on
the node in question?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sat, Jan 4, 2014 at 4:34 PM, Zeb Palmer <zeb@xxxxxxxxxxxxx> wrote:
> I have a small ceph 0.72.2 cluster built with ceph-deploy and running on
> ubuntu 12.04, this cluster is used as primary storage for my home openstack
> sandbox.
>
> I'm running into an issue I haven't seen before and have had a heck of a
> time searching for similar issues as "None" doesn't exactly make a good
> keyword.
>
>
> On one node, when I run any ceph command that interacts with the cluster, I
> get the appropriate output, but "None" is prepended to it.
>
> root@os2:/etc/ceph# ceph health
> None
> HEALTH_OK
>
>
> root@os2:/etc/ceph# ceph
> None
> ceph>
>
>
> Again, this only happens on one of the four ceph nodes. I've verified that
> conf files, keys, perms, versions, etc. match on all nodes, there are no
> connectivity issues, and so on. In fact, the ceph cluster is still healthy
> and working great, with one exception: Cinder-Volume also runs on this node,
> and since "None" is also getting prepended to JSON-formatted output,
> Cinder-Volume errors out in _get_mon_addrs() when the JSON decoder chokes on
> the response from ceph. (I'll probably throw a quick pre-decode band-aid on
> that method to get Cinder back online until I can correct this.)
>
> Here's my config, sans radosgw... although it hasn't changed recently.
>
> [global]
> fsid = 02a4abf4-3659-4525-bfe8-f1f5ea024030
> mon_initial_members = fs1,os1,cortex,os2
> mon_host = 10.10.3.8,10.10.3.10,10.10.3.7,10.10.3.20
> auth_supported = cephx
> osd_journal_size = 1024
> filestore_xattr_use_omap = true
> public_network = 10.10.3.0/24
> cluster_network = 10.10.150.0/24
>
>
> I've tried everything I can think of, hoping someone here can point out what
> I'm missing.
>
> Thanks
> zeb
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
