Hi all,
I tried upgrading Ceph from 9.0.3 to 9.1.0, but ran into some trouble.
I chowned the /var/lib/ceph folder as described in the release notes,
but my journal is on a separate partition, so I get:
Oct 19 11:58:59 ceph001.cubone.os systemd[1]: Started Ceph object
storage daemon.
Oct 19 11:58:59 ceph001.cubone.os ceph-osd[6806]: starting osd.1 at :/0
osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 19 11:58:59 ceph001.cubone.os ceph-osd[6806]: 2015-10-19
11:58:59.530204 7f18aeba8900 -1 filestore(/var/lib/ceph/osd/ceph-1)
mount failed to open journal /var/lib/ceph/osd/ceph-1/journal: (13)
Permission denied
Oct 19 11:58:59 ceph001.cubone.os ceph-osd[6806]: 2015-10-19
11:58:59.540355 7f18aeba8900 -1 osd.1 0 OSD:init: unable to mount object
store
Oct 19 11:58:59 ceph001.cubone.os ceph-osd[6806]: 2015-10-19
11:58:59.540370 7f18aeba8900 -1 ** ERROR: osd init failed: (13)
Permission denied
Oct 19 11:58:59 ceph001.cubone.os systemd[1]: ceph-osd@1.service: main
process exited, code=exited, status=1/FAILURE
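For reference, what I ran was essentially the procedure from the
release notes (a sketch of my steps; note the chown only touches the
filesystem tree, not the raw journal device):

    # stop all ceph daemons on the node first
    systemctl stop ceph.target
    # hand the data tree to the new ceph user, as per the release notes
    chown -R ceph:ceph /var/lib/ceph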
Is this a known issue?
I tried chowning the journal partition as well, but that didn't help
either; instead I now get this:
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: in thread 7fbb986fe900
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: ceph version 9.1.0
(3be81ae6cf17fcf689cd6f187c4615249fea4f61)
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 1: (()+0x7e1f22)
[0x7fbb98ef1f22]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 2: (()+0xf130)
[0x7fbb97067130]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 3: (gsignal()+0x37)
[0x7fbb958255d7]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 4: (abort()+0x148)
[0x7fbb95826cc8]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 5:
(__gnu_cxx::__verbose_terminate_handler()+0x165) [0x7fbb961389b5]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 6: (()+0x5e926)
[0x7fbb96136926]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 7: (()+0x5e953)
[0x7fbb96136953]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 8: (()+0x5eb73)
[0x7fbb96136b73]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 9:
(ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x27a) [0x7fbb98fe766a]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 10:
(OSDService::get_map(unsigned int)+0x3d) [0x7fbb98a97e2d]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 11:
(OSD::init()+0xb0b) [0x7fbb98a4bf7b]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 12: (main()+0x2998)
[0x7fbb989cf3b8]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 13:
(__libc_start_main()+0xf5) [0x7fbb95811af5]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 14: (()+0x2efb49)
[0x7fbb989ffb49]
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: NOTE: a copy of the
executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 19 12:10:34 ceph001.cubone.os ceph-osd[7763]: 0> 2015-10-19
12:10:34.710385 7fbb986fe900 -1 *** Caught signal (Aborted) **
So the OSDs do not start.
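The journal chown I attempted was along these lines (a sketch; on my
nodes the journal path under the OSD data dir is a symlink to the raw
partition, so I resolved it and chowned the device node):

    # resolve the journal symlink to the underlying partition
    JOURNAL=$(readlink -f /var/lib/ceph/osd/ceph-1/journal)
    # change ownership of the device node itself
    chown ceph:ceph "$JOURNAL"

As far as I know, device node ownership is recreated by udev at boot,
so even if this worked it would presumably need a udev rule to persist
across reboots.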
By the way, is there an easy way to restart only the OSDs, rather than
the mons and other daemons as well, as happens with ceph.target?
Could there be separate targets for the osd/mon/.. daemon types?
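Right now the only per-daemon handle I see is the instance unit itself
(osd.1 taken from the log above; I assume a glob over the instance
units would cover all OSDs on a host, if systemd accepts it):

    # restart a single OSD
    systemctl restart ceph-osd@1.service
    # restart every OSD instance on this host (glob unit matching)
    systemctl restart 'ceph-osd@*.service'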
Thanks!
Kenneth