Re: ceph-deploy issues rhel6

On Tue, 18 Jun 2013, Derek Yarnell wrote:
> Hi,
> 
> So the first error below is that /var/run/ceph isn't created when
> installing the ceph RPM(s).  This is because of line 440 in
> ceph.spec.in using the %ghost directive[1] for the file install.  My
> reading of that behavior is that the file or directory in this
> case will be included in the manifest but not installed.  I think this
> makes sense for specific log/state files but not for the directory itself.
> I see two options:
> 
> 1) Remove the %ghost directive and allow RPM to install the directory.
> This potentially leaves orphaned pid/state files behind after the
> package is removed.
> 
> 2) Create the directory in the %post section.  If it is created there
> and the %ghost directive stays, then when the RPM is uninstalled any
> stale pid/state files will be automatically deleted rather than left
> around.

3) create /var/run/ceph on daemon startup.  This is needed anyway for
Debian, where /var/run/ceph isn't supposed to be installed by the package
at all (and isn't).  It's probably true for other distros too... I think
the goal is for this to be a tmpfs (and to also move to /run or
something).  The init scripts do a 'mkdir -p /var/run/ceph', but sometimes
the daemons are run directly.

In any case, on my RHEL 6.3 box, I can 'rm -rf /var/run/ceph' and 'service 
ceph start mon' without problems.  What happens for you when you try this 
locally?

sage


>
> I personally would prefer the first option since that is the way most of
> the packages I have run into behave.  This %ghost directive was new to
> me and I have written a large number of locally managed RPM packages.
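
For illustration, a rough sketch of what the two spec-side options above
could look like in ceph.spec.in (the real spec uses macros such as
%{_localstatedir} rather than literal paths, so treat this as a sketch,
not a patch):

    # Option 1: drop %ghost so RPM owns and creates the directory at
    # install time
    %files
    %dir /var/run/ceph

    # Option 2: keep the %ghost entry (owned but not installed) and
    # create the directory in %post instead
    %files
    %ghost %dir /var/run/ceph

    %post
    mkdir -p /var/run/ceph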
> 
> However, there is still an error even once this is fixed.  It may be
> coming from the sysvinit 'service ceph start `hostname -s`' call at the
> end of mon_create.  I can start the service by hand on all my monitors
> and then get further, to where gatherkeys works and ceph status seems to
> think the monitors are up.  I am still looking to see how to track this
> exception, but again it seems very hard to do with pushy.
> 
> $ ../ceph-deploy/ceph-deploy mon create ceph00
> Traceback (most recent call last):
>   File "../ceph-deploy/ceph-deploy", line 8, in <module>
>     load_entry_point('ceph-deploy==1.1', 'console_scripts', 'ceph-deploy')()
>   File "/etc/ceph/ceph-deploy/ceph_deploy/cli.py", line 112, in main
>     return args.func(args)
>   File "/etc/ceph/ceph-deploy/ceph_deploy/mon.py", line 236, in mon
>     mon_create(args)
>   File "/etc/ceph/ceph-deploy/ceph_deploy/mon.py", line 140, in mon_create
>     init=init,
>   File
> "/etc/ceph/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/proxy.py",
> line 255, in <lambda>
>     (conn.operator(type_, self, args, kwargs))
>   File
> "/etc/ceph/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/connection.py",
> line 66, in operator
>     return self.send_request(type_, (object, args, kwargs))
>   File
> "/etc/ceph/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/baseconnection.py",
> line 323, in send_request
>     return self.__handle(m)
>   File
> "/etc/ceph/ceph-deploy/virtualenv/lib/python2.6/site-packages/pushy-0.5.1-py2.6.egg/pushy/protocol/baseconnection.py",
> line 639, in __handle
>     raise e
> ceph-mon: mon.noname-a 192.168.7.222:6789/0 is local, renaming to mon.ceph00
> ceph-mon: set fsid to aee447eb-af2b-406c-b4f5-538a4cc84b3c
> ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph00 for mon.ceph00
> 2013-06-18 14:54:24.265954 7f63ee659780 -1 asok(0x12d8000)
> AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed
> to bind the UNIX domain socket to '/var/run/ceph/ceph-mon.ceph00.asok':
> (2) No such file or directory
> pushy.protocol.proxy.ExceptionProxy: [Errno 2] No such file or directory
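
(The bind failure above is just the missing /var/run/ceph directory;
assuming nothing else is wrong, the immediate workaround on the monitor
host is the same thing the init script would do:)

    # run on the affected monitor host, then retry mon create
    mkdir -p /var/run/ceph
    service ceph start mon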
> 
> 
> # ceph status
>    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
>    monmap e2: 3 mons at
> {ceph00=192.168.7.222:6789/0,ceph01=192.168.7.223:6789/0,ceph02=192.168.7.224:6789/0},
> election epoch 6, quorum 0,1,2 ceph00,ceph01,ceph02
>    osdmap e1: 0 osds: 0 up, 0 in
>     pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0
> KB avail
>    mdsmap e1: 0/0/1 up
> 
> 
> [1] -
> http://www.rpm.org/max-rpm-snapshot/s1-rpm-inside-files-list-directives.html
> 
> -- 
> ---
> Derek T. Yarnell
> University of Maryland
> Institute for Advanced Computer Studies
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



