This appears to be a buggy libtcmalloc. Ceph hasn't gotten to main() yet
from the looks of things; tcmalloc is still initializing. Hopefully
Fedora has a newer version of the package?

sage

On Thu, 13 Nov 2014, Harm Weites wrote:
> Hi Sage,
>
> Here you go: http://paste.openstack.org/show/132936/
>
> Harm
>
> On 13-11-14 at 00:44, Sage Weil wrote:
> > On Wed, 12 Nov 2014, Harm Weites wrote:
> >> Hi,
> >>
> >> When trying to add a new OSD to my cluster, the ceph-osd process hangs:
> >>
> >> # ceph-osd -i $id --mkfs --mkkey
> >> <nothing>
> >>
> >> At this point I have to explicitly kill -9 the ceph-osd since it
> >> doesn't respond to anything. It also didn't adhere to my foreground
> >> debug log request; the logs are empty. Stracing the ceph-osd [1]
> >> shows it is very busy with this:
> >>
> >> nanosleep({0, 2000001}, NULL) = 0
> >> gettimeofday({1415741192, 862216}, NULL) = 0
> >> nanosleep({0, 2000001}, NULL) = 0
> >> gettimeofday({1415741192, 864563}, NULL) = 0
> > Can you gdb attach to the ceph-osd process while it is in this state
> > and see what 'bt' says?
> >
> > sage
> >
> >
> >> I've rebuilt python to undo a threading regression [2], though that's
> >> unrelated to this issue. It did fix ceph not returning properly after
> >> commands like 'ceph osd tree', so it is useful.
> >>
> >> This machine is Fedora 21 on ARM with ceph-0.80.7-1.fc21.armv7hl. The
> >> mon/mds/osd are all x86, CentOS 7. Could this be a configuration
> >> issue on my end, or is something just broken on my platform?
> >>
> >> # lscpu
> >> Architecture:          armv7l
> >> Byte Order:            Little Endian
> >> CPU(s):                2
> >> On-line CPU(s) list:   0,1
> >> Thread(s) per core:    1
> >> Core(s) per socket:    2
> >> Socket(s):             1
> >> Model name:            ARMv7 Processor rev 4 (v7l)
> >>
> >> [1] http://paste.openstack.org/show/132555/
> >> [2] http://bugs.python.org/issue21963
> >>
> >> Regards,
> >> Harm

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
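
To follow up on the tcmalloc suspicion above, a minimal sketch for
checking what the OSD binary is actually linked against and what Fedora
ships. The package name gperftools-libs and the /usr/lib library path
are assumptions for Fedora 21 on armv7hl; verify both locally:

# ldd /usr/bin/ceph-osd | grep tcmalloc   # is ceph-osd linked against libtcmalloc?
# rpm -qf /usr/lib/libtcmalloc.so.4       # which package owns the library (path assumed)
# rpm -q gperftools-libs                  # installed version
# yum update gperftools-libs              # pull in a newer build, if Fedora has one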
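
Attaching gdb as Sage asks might look like the sketch below, assuming a
single hung ceph-osd process (so pidof returns exactly one pid) and
installed debuginfo packages for readable symbols:

# gdb -p $(pidof ceph-osd)
(gdb) thread apply all bt
(gdb) detach
(gdb) quit

'thread apply all bt' prints a backtrace for every thread rather than
only the one gdb happened to stop in; with a spin like the
nanosleep/gettimeofday loop above, the interesting thread is often not
the first.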
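
On the empty logs: a sketch of a foreground debug invocation for a
firefly (0.80.x) ceph-osd, using the -d flag (run in foreground, log to
stderr) plus command-line debug overrides. The debug levels shown are
illustrative. Note that if tcmalloc hangs before main(), these options
are never parsed, which would explain the silence:

# ceph-osd -i $id --mkfs --mkkey -d --debug-osd 20 --debug-ms 1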