http://bugzilla.kernel.org/show_bug.cgi?id=13493

yury <urykhy@xxxxxxxxx> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |urykhy@xxxxxxxxx

--- Comment #4 from yury <urykhy@xxxxxxxxx> 2009-06-10 16:50:36 ---

also in 2.6.30

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.30 #1
-------------------------------------------------------
hald-addon-cpuf/2938 is trying to acquire lock:
 (&(&dbs_info->work)->work){+.+...}, at: [<c012d218>] __cancel_work_timer+0x8f/0x131

but task is already holding lock:
 (dbs_mutex){+.+.+.}, at: [<c02aebe1>] cpufreq_governor_dbs+0x231/0x2bc

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (dbs_mutex){+.+.+.}:
       [<c013e76c>] __lock_acquire+0xf9b/0x12a4
       [<c013eb07>] lock_acquire+0x92/0xb4
       [<c0341a5c>] mutex_lock_nested+0x39/0x25d
       [<c02aea02>] cpufreq_governor_dbs+0x52/0x2bc
       [<c02ace63>] __cpufreq_governor+0x5d/0x91
       [<c02acf97>] __cpufreq_set_policy+0xe7/0x11f
       [<c02adcd7>] cpufreq_add_dev+0x22a/0x2bc
       [<c02772c8>] sysdev_driver_register+0x96/0xe5
       [<c02ad353>] cpufreq_register_driver+0x7c/0xd6
       [<f8b2e080>] 0xf8b2e080
       [<c0101131>] _stext+0x49/0x119
       [<c01460ef>] sys_init_module+0x89/0x192
       [<c01030a8>] sysenter_do_call+0x12/0x3c
       [<ffffffff>] 0xffffffff

-> #1 (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}:
       [<c013e76c>] __lock_acquire+0xf9b/0x12a4
       [<c013eb07>] lock_acquire+0x92/0xb4
       [<c03421d4>] down_write+0x29/0x63
       [<c02ad9da>] lock_policy_rwsem_write+0x1d/0x33
       [<c02ae78e>] do_dbs_timer+0x36/0x258
       [<c012cef0>] worker_thread+0x189/0x256
       [<c012fbac>] kthread+0x42/0x67
       [<c01038ff>] kernel_thread_helper+0x7/0x10
       [<ffffffff>] 0xffffffff

-> #0 (&(&dbs_info->work)->work){+.+...}:
       [<c013e4fc>] __lock_acquire+0xd2b/0x12a4
       [<c013eb07>] lock_acquire+0x92/0xb4
       [<c012d234>] __cancel_work_timer+0xab/0x131
       [<c012d2c5>] cancel_delayed_work_sync+0xb/0xd
       [<c02aebf2>] cpufreq_governor_dbs+0x242/0x2bc
       [<c02ace63>] __cpufreq_governor+0x5d/0x91
       [<c02acf81>] __cpufreq_set_policy+0xd1/0x11f
       [<c02ad822>] store_scaling_governor+0x197/0x1bf
       [<c02addb1>] store+0x48/0x61
       [<c01a96f2>] sysfs_write_file+0xb9/0xe4
       [<c0175884>] vfs_write+0x8a/0x11c
       [<c01759af>] sys_write+0x3b/0x60
       [<c01030a8>] sysenter_do_call+0x12/0x3c
       [<ffffffff>] 0xffffffff

other info that might help us debug this:

3 locks held by hald-addon-cpuf/2938:
 #0:  (&buffer->mutex){+.+.+.}, at: [<c01a965e>] sysfs_write_file+0x25/0xe4
 #1:  (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [<c02ad9da>] lock_policy_rwsem_write+0x1d/0x33
 #2:  (dbs_mutex){+.+.+.}, at: [<c02aebe1>] cpufreq_governor_dbs+0x231/0x2bc

stack backtrace:
Pid: 2938, comm: hald-addon-cpuf Not tainted 2.6.30 #1
Call Trace:
 [<c03409ec>] ? printk+0xf/0x13
 [<c013d3fe>] print_circular_bug_tail+0xa2/0xad
 [<c013e4fc>] __lock_acquire+0xd2b/0x12a4
 [<c013eb07>] lock_acquire+0x92/0xb4
 [<c012d218>] ? __cancel_work_timer+0x8f/0x131
 [<c012d234>] __cancel_work_timer+0xab/0x131
 [<c012d218>] ? __cancel_work_timer+0x8f/0x131
 [<c013cae1>] ? mark_held_locks+0x43/0x5b
 [<c0341c68>] ? mutex_lock_nested+0x245/0x25d
 [<c013cd37>] ? trace_hardirqs_on_caller+0x101/0x122
 [<c0341c78>] ? mutex_lock_nested+0x255/0x25d
 [<c02aebe1>] ? cpufreq_governor_dbs+0x231/0x2bc
 [<c012d2c5>] cancel_delayed_work_sync+0xb/0xd
 [<c02aebf2>] cpufreq_governor_dbs+0x242/0x2bc
 [<c02ace63>] __cpufreq_governor+0x5d/0x91
 [<c02acf81>] __cpufreq_set_policy+0xd1/0x11f
 [<c02ad822>] store_scaling_governor+0x197/0x1bf
 [<c02adea0>] ? handle_update+0x0/0xd
 [<c02ad68b>] ? store_scaling_governor+0x0/0x1bf
 [<c02addb1>] store+0x48/0x61
 [<c01a96f2>] sysfs_write_file+0xb9/0xe4
 [<c01a9639>] ? sysfs_write_file+0x0/0xe4
 [<c0175884>] vfs_write+0x8a/0x11c
 [<c01759af>] sys_write+0x3b/0x60
 [<c01030a8>] sysenter_do_call+0x12/0x3c

-- 
Configure bugmail: http://bugzilla.kernel.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.