Hi,

We are experiencing the following issue:

- Hammer 0.94.2
- Ubuntu 14.04.1
- Kernel 3.16.0-37-generic
- 40TB NTFS disk mounted through RBD

The first 50GB of writes goes fine, but then this appears in the kernel log:

Jul 5 16:56:01 cephclient kernel: [110581.046141] kworker/u65:1 D ffff88040fc530c0 0 295 2 0x00000000
Jul 5 16:56:01 cephclient kernel: [110581.046149] Workqueue: writeback bdi_writeback_workfn (flush-251:16)
Jul 5 16:56:01 cephclient kernel: [110581.046151] ffff8804064c77f0 0000000000000046 ffff8804064c8000 ffff8804064c7fd8
Jul 5 16:56:01 cephclient kernel: [110581.046154] 00000000000130c0 00000000000130c0 ffff8803c2c11e90 ffff88040fc539c0
Jul 5 16:56:01 cephclient kernel: [110581.046156] ffff8803d0108000 ffff880407764068 ffff8803d0108030 ffff8803d0108000
Jul 5 16:56:01 cephclient kernel: [110581.046158] Call Trace:
Jul 5 16:56:01 cephclient kernel: [110581.046164] [<ffffffff817694bd>] io_schedule+0x9d/0x130
Jul 5 16:56:01 cephclient kernel: [110581.046168] [<ffffffff813578b5>] get_request+0x1a5/0x790
Jul 5 16:56:01 cephclient kernel: [110581.046172] [<ffffffff810b4e50>] ? prepare_to_wait_event+0x100/0x100
Jul 5 16:56:01 cephclient kernel: [110581.046175] [<ffffffff81359af7>] blk_queue_bio+0xb7/0x380
Jul 5 16:56:01 cephclient kernel: [110581.046177] [<ffffffff81354ff0>] generic_make_request+0xc0/0x110
Jul 5 16:56:01 cephclient kernel: [110581.046179] [<ffffffff813550b8>] submit_bio+0x78/0x160
Jul 5 16:56:01 cephclient kernel: [110581.046181] [<ffffffff813503d6>] ? bio_alloc_bioset+0x1a6/0x2b0
Jul 5 16:56:01 cephclient kernel: [110581.046185] [<ffffffff8116bd69>] ? account_page_writeback+0x29/0x30
Jul 5 16:56:01 cephclient kernel: [110581.046188] [<ffffffff8120628b>] _submit_bh+0x13b/0x210
Jul 5 16:56:01 cephclient kernel: [110581.046190] [<ffffffff81208e85>] __block_write_full_page.constprop.38+0x125/0x360
Jul 5 16:56:01 cephclient kernel: [110581.046192] [<ffffffff81209850>] ? I_BDEV+0x10/0x10
Jul 5 16:56:01 cephclient kernel: [110581.046194] [<ffffffff81209850>] ? I_BDEV+0x10/0x10
Jul 5 16:56:01 cephclient kernel: [110581.046196] [<ffffffff81209186>] block_write_full_page+0xc6/0xd0
Jul 5 16:56:01 cephclient kernel: [110581.046198] [<ffffffff8120a128>] blkdev_writepage+0x18/0x20
Jul 5 16:56:01 cephclient kernel: [110581.046200] [<ffffffff8116bad3>] __writepage+0x13/0x50
Jul 5 16:56:01 cephclient kernel: [110581.046202] [<ffffffff8116c495>] write_cache_pages+0x235/0x480
Jul 5 16:56:01 cephclient kernel: [110581.046204] [<ffffffff8116bac0>] ? global_dirtyable_memory+0x50/0x50
Jul 5 16:56:01 cephclient kernel: [110581.046207] [<ffffffff8116c723>] generic_writepages+0x43/0x60
Jul 5 16:56:01 cephclient kernel: [110581.046209] [<ffffffff81768d3f>] ? __schedule+0x35f/0x7a0
Jul 5 16:56:01 cephclient kernel: [110581.046211] [<ffffffff8116d87e>] do_writepages+0x1e/0x40
Jul 5 16:56:01 cephclient kernel: [110581.046213] [<ffffffff811fc520>] __writeback_single_inode+0x40/0x220
Jul 5 16:56:01 cephclient kernel: [110581.046215] [<ffffffff811fd017>] writeback_sb_inodes+0x247/0x3e0
Jul 5 16:56:01 cephclient kernel: [110581.046217] [<ffffffff811fd24f>] __writeback_inodes_wb+0x9f/0xd0
Jul 5 16:56:01 cephclient kernel: [110581.046219] [<ffffffff811fd4c3>] wb_writeback+0x243/0x2c0
Jul 5 16:56:01 cephclient kernel: [110581.046222] [<ffffffff811ffc52>] bdi_writeback_workfn+0x2a2/0x430
Jul 5 16:56:01 cephclient kernel: [110581.046225] [<ffffffff8108a402>] process_one_work+0x182/0x450
Jul 5 16:56:01 cephclient kernel: [110581.046227] [<ffffffff8108ab71>] worker_thread+0x121/0x570
Jul 5 16:56:01 cephclient kernel: [110581.046229] [<ffffffff8108aa50>] ? rescuer_thread+0x380/0x380
Jul 5 16:56:01 cephclient kernel: [110581.046231] [<ffffffff81091412>] kthread+0xd2/0xf0
Jul 5 16:56:01 cephclient kernel: [110581.046234] [<ffffffff81091340>] ? kthread_create_on_node+0x1c0/0x1c0
Jul 5 16:56:01 cephclient kernel: [110581.046237] [<ffffffff8176d158>] ret_from_fork+0x58/0x90
Jul 5 16:56:01 cephclient kernel: [110581.046239] [<ffffffff81091340>] ? kthread_create_on_node+0x1c0/0x1c0
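For reference, the disk is mapped and mounted roughly as below; the pool name, image name, client id and mount point are placeholders, not our exact configuration:

    # map the 40TB image with the kernel RBD client
    rbd map rbd/ntfs-image --id admin
    # the mapped device shows up as /dev/rbd0 (and /dev/rbd/<pool>/<image>)
    # and is mounted as NTFS via ntfs-3g
    mount -t ntfs-3g /dev/rbd0 /mnt/ntfs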
Would updating the kernel fix this, and if so, which version should we upgrade to?

Thanks

Br,
T