On Wed, May 30, 2012 at 10:35 AM, Guido Winkelmann <guido-ceph@xxxxxxxxxxxxxxxxx> wrote:
> I just saw a kernel crash on one of my machines. It had the cephfs from the
> ceph cluster mounted using the in-kernel client:
...
> [522247.751290] [<ffffffff815ea46a>] bad_area_nosemaphore+0x13/0x15
> [522247.751397] [<ffffffff815f7b76>] do_page_fault+0x416/0x4f0
> [522247.751503] [<ffffffff814ce5dd>] ? sock_recvmsg+0x11d/0x140
> [522247.751611] [<ffffffff812c0ea6>] ? cpumask_next_and+0x36/0x50
> [522247.751718] [<ffffffff815f4475>] page_fault+0x25/0x30
> [522247.751828] [<ffffffffa03ccba4>] ? ceph_x_destroy_authorizer+0x14/0x40 [libceph]
> [522247.751995] [<ffffffffa040f9be>] get_authorizer+0x6e/0x140 [ceph]
> [522247.752104] [<ffffffff814ce646>] ? kernel_recvmsg+0x46/0x60

Hi. I can't find any other reports of kernel crashes related to ceph_x_destroy_authorizer either. If this keeps happening, please file a bug report at http://tracker.newdream.net/ with details on the circumstances under which it triggers, and we'll get back to it once we re-focus on the Ceph Distributed File System. Right now we are focusing our efforts on RADOS, RBD, and radosgw functionality, to further stabilize and optimize the core object store of the product.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html