Re: Old vs New pool on same OSDs - Performance Difference

Hi Somnath,

Sorry for the delay. Release is Hammer, so I can probably drop that setting then.

I have hopefully managed to capture a couple of slow IOs on an idle cluster; does the below look about right to you? I can see there is a delay right after the "get_object_context: obc NOT found in cache" entry. Is that indicative of anything?

2015-06-29 12:35:19.551447 7fd8a7d44700 15 osd.1 26335 enqueue_op 0x6b9a4d00 prio 63 cost 0 latency 0.000184 osd_op(client.2808923.0:145 rb.0.1ba70.238e1f29.00000000ad7a [read 524288~65536] 0.b974752f ack+read+known_if_redirected e26335) v5
2015-06-29 12:35:19.551555 7fd8b1a71700 10 osd.1 26335 dequeue_op 0x6b9a4d00 prio 63 cost 0 latency 0.000291 osd_op(client.2808923.0:145 rb.0.1ba70.238e1f29.00000000ad7a [read 524288~65536] 0.b974752f ack+read+known_if_redirected e26335) v5 pg pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean]
2015-06-29 12:35:19.551936 7fd8b1a71700 20 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] op_has_sufficient_caps pool=0 (rbd ) owner=0 need_read_cap=1 need_write_cap=0 need_class_read_cap=0 need_class_write_cap=0 -> yes
2015-06-29 12:35:19.552422 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] handle_message: 0x6b9a4d00
2015-06-29 12:35:19.552446 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] do_op osd_op(client.2808923.0:145 rb.0.1ba70.238e1f29.00000000ad7a [read 524288~65536] 0.b974752f ack+read+known_if_redirected e26335) v5 may_read -> read-ordered flags ack+read+known_if_redirected
2015-06-29 12:35:19.552728 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] get_object_context: obc NOT found in cache: b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0
2015-06-29 12:35:19.616951 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] populate_obc_watchers b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0
2015-06-29 12:35:19.617349 7fd8b1a71700 20 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] ReplicatedPG::check_blacklisted_obc_watchers for obc b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0
2015-06-29 12:35:19.617387 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] get_object_context: creating obc from disk: 0x6863b180
2015-06-29 12:35:19.617409 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] get_object_context: 0x6863b180 b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0 rwstate(none n=0 w=0) oi: b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0(2160'454478 osd.1.0:2693729 wrlock_by=unknown.0.0:0 dirty|data_digest|omap_digest s 1048576 uv 43100 dd 88ec7921 od ffffffff) ssc: 0x11a8a3a0 snapset: 0=[]:[]+head
2015-06-29 12:35:19.617516 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] find_object_context b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0 @head oi=b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0(2160'454478 osd.1.0:2693729 wrlock_by=unknown.0.0:0 dirty|data_digest|omap_digest s 1048576 uv 43100 dd 88ec7921 od ffffffff)
2015-06-29 12:35:19.617821 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] execute_ctx 0x5eb34400
2015-06-29 12:35:19.617943 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] do_op b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0 [read 524288~65536] ov 2160'454478
2015-06-29 12:35:19.617991 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean]  taking ondisk_read_lock
2015-06-29 12:35:19.618003 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] do_osd_op b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0 [read 524288~65536]
2015-06-29 12:35:19.618014 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] do_osd_op  read 524288~65536
2015-06-29 12:35:19.618432 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean]  read got 65536 / 65536 bytes from obj b974752f/rb.0.1ba70.238e1f29.00000000ad7a/head//0
2015-06-29 12:35:19.618555 7fd8b1a71700 10 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean]  dropping ondisk_read_lock
2015-06-29 12:35:19.618733 7fd8b1a71700 15 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] do_osd_op_effects client.2808923 con 0x6d9ad760
2015-06-29 12:35:19.618803 7fd8b1a71700 15 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] log_op_stats osd_op(client.2808923.0:145 rb.0.1ba70.238e1f29.00000000ad7a [read 524288~65536] 0.b974752f ack+read+known_if_redirected e26335) v5 inb 0 outb 65536 rlat 0.000000 lat 0.067538
2015-06-29 12:35:19.618983 7fd8b1a71700 15 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean] publish_stats_to_osd 26335:1206609
2015-06-29 12:35:19.619143 7fd8b1a71700 15 osd.1 pg_epoch: 26335 pg[0.12f( v 26335'1299179 (24993'1296145,26335'1299179] local-les=26276 n=5044 ec=1 les/c 26276/26309 26264/26270/26270) [1,18,40] r=0 lpr=26270 crt=26335'1299176 lcod 26335'1299178 mlcod 26335'1299178 active+clean]  requeue_ops
2015-06-29 12:35:19.619170 7fd8b1a71700 10 osd.1 26335 dequeue_op 0x6b9a4d00 finish
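For what it's worth, almost the whole of the 0.067538 s "lat" reported by log_op_stats sits between the "obc NOT found in cache" line and the following populate_obc_watchers line, i.e. while the OSD builds the obc from disk. A quick sketch (timestamps copied from the log above) to put a number on that gap:

```python
from datetime import datetime

def ts(s):
    # Parse the "YYYY-MM-DD HH:MM:SS.micros" prefix the OSD log uses
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

# Timestamps copied from the first IO above
cache_miss = ts("2015-06-29 12:35:19.552728")  # get_object_context: obc NOT found in cache
obc_loaded = ts("2015-06-29 12:35:19.616951")  # populate_obc_watchers (obc built from disk)

gap = (obc_loaded - cache_miss).total_seconds()
print("obc load gap: %.6f s" % gap)  # ~0.064 s of the 0.067538 s total lat
```

So roughly 95% of this op's latency is spent fetching the object metadata on the cache miss, not reading the data itself.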


Second IO

2015-06-29 12:35:23.928222 7fd8a7d44700 15 osd.1 26335 enqueue_op 0x6b9a4900 prio 63 cost 0 latency 0.000381 osd_op(client.2808923.0:188 rb.0.1ba70.238e1f29.00000000264a [read 720896~65536] 0.52c31661 ack+read+known_if_redirected e26335) v5
2015-06-29 12:35:23.928291 7fd8b3a75700 10 osd.1 26335 dequeue_op 0x6b9a4900 prio 63 cost 0 latency 0.000450 osd_op(client.2808923.0:188 rb.0.1ba70.238e1f29.00000000264a [read 720896~65536] 0.52c31661 ack+read+known_if_redirected e26335) v5 pg pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean]
2015-06-29 12:35:23.928676 7fd8b3a75700 20 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] op_has_sufficient_caps pool=0 (rbd ) owner=0 need_read_cap=1 need_write_cap=0 need_class_read_cap=0 need_class_write_cap=0 -> yes
2015-06-29 12:35:23.928910 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] handle_message: 0x6b9a4900
2015-06-29 12:35:23.928929 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] do_op osd_op(client.2808923.0:188 rb.0.1ba70.238e1f29.00000000264a [read 720896~65536] 0.52c31661 ack+read+known_if_redirected e26335) v5 may_read -> read-ordered flags ack+read+known_if_redirected
2015-06-29 12:35:23.929375 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] get_object_context: obc NOT found in cache: 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0
2015-06-29 12:35:23.942620 7fd8be28a700 20 osd.1 26335 share_map_peer 0x5e5ad180 already has epoch 26335
2015-06-29 12:35:23.942694 7fd8bfa8d700 20 osd.1 26335 share_map_peer 0x5e5ad180 already has epoch 26335
2015-06-29 12:35:24.055268 7fd8d7ae5700  5 osd.1 26335 tick
2015-06-29 12:35:24.055298 7fd8d7ae5700 20 osd.1 26335 scrub_random_backoff lost coin flip, randomly backing off
2015-06-29 12:35:24.055301 7fd8d7ae5700 10 osd.1 26335 do_waiters -- start
2015-06-29 12:35:24.055303 7fd8d7ae5700 10 osd.1 26335 do_waiters -- finish
2015-06-29 12:35:24.068422 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] populate_obc_watchers 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0
2015-06-29 12:35:24.068563 7fd8b3a75700 20 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] ReplicatedPG::check_blacklisted_obc_watchers for obc 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0
2015-06-29 12:35:24.068667 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] get_object_context: creating obc from disk: 0x641a3180
2015-06-29 12:35:24.068693 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] get_object_context: 0x641a3180 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0 rwstate(none n=0 w=0) oi: 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0(2700'867765 osd.1.0:3390976 wrlock_by=unknown.0.0:0 dirty|data_digest|omap_digest s 1048576 uv 802272 dd 3b309dc0 od ffffffff) ssc: 0x15842c30 snapset: 0=[]:[]+head
2015-06-29 12:35:24.068812 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] find_object_context 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0 @head oi=52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0(2700'867765 osd.1.0:3390976 wrlock_by=unknown.0.0:0 dirty|data_digest|omap_digest s 1048576 uv 802272 dd 3b309dc0 od ffffffff)
2015-06-29 12:35:24.069048 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] execute_ctx 0x6e47ca00
2015-06-29 12:35:24.069136 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] do_op 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0 [read 720896~65536] ov 2700'867765
2015-06-29 12:35:24.069275 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean]  taking ondisk_read_lock
2015-06-29 12:35:24.069322 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] do_osd_op 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0 [read 720896~65536]
2015-06-29 12:35:24.069335 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] do_osd_op  read 720896~65536
2015-06-29 12:35:24.072584 7fd8be28a700 20 osd.1 26335 share_map_peer 0x5e5ac680 already has epoch 26335
2015-06-29 12:35:24.072677 7fd8bfa8d700 20 osd.1 26335 share_map_peer 0x5e5ac680 already has epoch 26335
2015-06-29 12:35:24.074427 7fd8be28a700 20 osd.1 26335 share_map_peer 0x67792520 already has epoch 26335
2015-06-29 12:35:24.074461 7fd8bfa8d700 20 osd.1 26335 share_map_peer 0x67792520 already has epoch 26335
2015-06-29 12:35:24.083945 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean]  read got 65536 / 65536 bytes from obj 52c31661/rb.0.1ba70.238e1f29.00000000264a/head//0
2015-06-29 12:35:24.084056 7fd8b3a75700 10 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean]  dropping ondisk_read_lock
2015-06-29 12:35:24.084094 7fd8b3a75700 15 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] do_osd_op_effects client.2808923 con 0x6d9ad760
2015-06-29 12:35:24.084183 7fd8b3a75700 15 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] log_op_stats osd_op(client.2808923.0:188 rb.0.1ba70.238e1f29.00000000264a [read 720896~65536] 0.52c31661 ack+read+known_if_redirected e26335) v5 inb 0 outb 65536 rlat 0.000000 lat 0.156341
2015-06-29 12:35:24.084499 7fd8b3a75700 15 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean] publish_stats_to_osd 26335:1261700
2015-06-29 12:35:24.084660 7fd8b3a75700 15 osd.1 pg_epoch: 26335 pg[0.261( v 26335'1369662 (24993'1366661,26335'1369662] local-les=26276 n=5050 ec=1 les/c 26276/26311 26264/26270/26270) [1,23,11] r=0 lpr=26270 crt=26335'1369659 lcod 26335'1369661 mlcod 26335'1369661 active+clean]  requeue_ops
2015-06-29 12:35:24.084700 7fd8b3a75700 10 osd.1 26335 dequeue_op 0x6b9a4900 finish
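Same arithmetic for the second IO: the stall again falls between the cache-miss line and populate_obc_watchers, and it accounts for most of the 0.156341 s that log_op_stats reports. A small sketch using the timestamps above:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
cache_miss = datetime.strptime("2015-06-29 12:35:23.929375", fmt)  # obc NOT found in cache
obc_loaded = datetime.strptime("2015-06-29 12:35:24.068422", fmt)  # populate_obc_watchers

gap = (obc_loaded - cache_miss).total_seconds()
total_lat = 0.156341  # "lat" from the log_op_stats line above

# The obc load from disk dominates this op's latency as well
print("obc load gap: %.6f s (%.0f%% of total lat)" % (gap, 100 * gap / total_lat))
```

That puts the obc load at roughly 0.139 s, i.e. around 89% of the op's total latency, which matches the pattern in the first IO.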



> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Somnath Roy
> Sent: 21 June 2015 06:03
> To: Nick Fisk
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Old vs New pool on same OSDs - Performance
> Difference
> 
> What release you are using ?
> filestore_xattr_use_omap  is deprecated long back..
> 
> Thanks & Regards
> Somnath
> 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
