On 6-5-2016 06:49, kefu chai wrote:
> On Thu, May 5, 2016 at 4:44 AM, Willem Jan Withagen <wjw@xxxxxxxxxxx> wrote:
>> Hi,
>>
>> Setup:
>> Fresh VM with CentOS 7
>> standard ceph/ceph clone
>> building with run-make-check.sh
>> So the dependencies are installed by install-deps.sh
>>
>> I have several tests that fail with:
>>
>> run_osd: ceph-disk --statedir=testdir/osd-crush --sysconfdir=testdir/osd-crush --prepend-to-path= prepare testdir/osd-crush/0
>> Traceback (most recent call last):
>>   File "/tmp/ceph-disk-virtualenv/bin/ceph-disk", line 5, in <module>
>>     from pkg_resources import load_entry_point
>>   File "/tmp/ceph-disk-virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 3007, in <module>
>>     working_set.require(__requires__)
>>   File "/tmp/ceph-disk-virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 728, in require
>>     needed = self.resolve(parse_requirements(requirements))
>>   File "/tmp/ceph-disk-virtualenv/lib/python2.7/site-packages/pkg_resources.py", line 626, in resolve
>>     raise DistributionNotFound(req)
>> pkg_resources.DistributionNotFound: ceph-disk==1.0.0
>>
>> Looks like ceph-disk is not in the fresh install in /tmp...
>> I ran into this with my development fork, but then also with a regular HEAD.
>> Did I miss something?

Willem, you are not missing anything. The problem you are facing is what
sometimes happens in our Jenkins; for example, see
https://jenkins.ceph.com/job/ceph-pull-requests/5222/consoleFull . Loïc tried
to root-cause this and found that it only happens on rhel7.0.
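For what it's worth, the traceback above is the generic pkg_resources failure mode: the console-script stub in the virtualenv's bin/ directory calls working_set.require() for its own distribution, and that raises DistributionNotFound when the distribution is absent from the environment the interpreter actually resolves to. A minimal sketch of just that mechanism (the distribution name below is made up, not a real package):

```python
# Sketch of the failure mode in the traceback: pkg_resources.require()
# raises DistributionNotFound when the named distribution is not
# installed in the active environment. "no-such-dist" is a made-up name.
import pkg_resources

try:
    pkg_resources.require("no-such-dist==1.0.0")
except pkg_resources.DistributionNotFound as exc:
    print("missing:", exc)
```

If the same require() runs against a virtualenv where ceph-disk was actually installed, it returns normally, which is why the failure points at the environment rather than the code.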
Right,

Adding /usr/local/bin to the PATH did "fix" this problem for me,
and I think that is because python lives in /usr/local/bin on FreeBSD.
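The reason prepending works is plain PATH ordering: the shell resolves an unqualified command name to the first match along PATH, so whichever python directory comes first wins. A small illustration with a fabricated command name (nothing below is part of the Ceph scripts):

```shell
# Demonstrate that the first PATH entry wins command lookup.
# "mypython" is a fabricated stand-in for a real interpreter.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho fake-python\n' > "$tmpdir/mypython"
chmod +x "$tmpdir/mypython"

PATH="$tmpdir:$PATH"
command -v mypython   # resolves inside $tmpdir
mypython              # prints: fake-python
```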
--WjW