> $ perf record -a -f -g
> $ perf report -g

here we go:

-  49.72% _raw_spin_lock
   - 32.32% kvm_mmu_pte_write
      - 98.02% emulator_write_phys
           emulator_write_emulated_onepage
           emulator_write_emulated
           x86_emulate_insn
           emulate_instruction
           kvm_mmu_page_fault
           handle_exception
           vmx_handle_exit
           kvm_arch_vcpu_ioctl_run
           kvm_vcpu_ioctl
           vfs_ioctl
           do_vfs_ioctl
           sys_ioctl
           system_call_fastpath
           __GI_ioctl
      - 1.98% paging64_invlpg
           kvm_mmu_invlpg
           handle_invlpg
           vmx_handle_exit
           kvm_arch_vcpu_ioctl_run
           kvm_vcpu_ioctl
           vfs_ioctl
           do_vfs_ioctl
           sys_ioctl
           system_call_fastpath
           __GI_ioctl
   - 23.66% task_rq_lock
      - try_to_wake_up
         - 94.76% wake_up_process
              cpu_stop_queue_work
              __stop_cpus
              try_stop_cpus
              synchronize_sched_expedited
              __synchronize_srcu
              synchronize_srcu_expedited
              __kvm_set_memory_region
              kvm_set_memory_region
              kvm_vm_ioctl_set_memory_region
              kvm_vm_ioctl
              vfs_ioctl
              do_vfs_ioctl
              sys_ioctl
              system_call_fastpath
              __GI_ioctl
         - 4.44% default_wake_function
            - 53.81% autoremove_wake_function
                 __wake_up_common
                 __wake_up
                 kvm_vcpu_kick
                 __apic_accept_irq
                 kvm_apic_set_irq
               - kvm_irq_delivery_to_apic
                  - 58.41% apic_reg_write
                       apic_mmio_write
                       emulator_write_emulated_onepage
                       emulator_write_emulated
                       x86_emulate_insn
                       emulate_instruction
                       kvm_mmu_page_fault
                       handle_exception
                       vmx_handle_exit
                       kvm_arch_vcpu_ioctl_run
                       kvm_vcpu_ioctl
                       vfs_ioctl
                       do_vfs_ioctl
                       sys_ioctl
                       system_call_fastpath
                       __GI_ioctl
                  - 29.60% ioapic_service
                       kvm_ioapic_set_irq
                       kvm_set_ioapic_irq
                       kvm_set_irq
                       kvm_arch_vm_ioctl
                       kvm_vm_ioctl
                       vfs_ioctl
                       do_vfs_ioctl
                       sys_ioctl
                       system_call_fastpath
                       __GI_ioctl
                  - 11.99% kvm_set_msi
                       kvm_set_irq
                       kvm_arch_vm_ioctl
                       kvm_vm_ioctl
                       vfs_ioctl
                       do_vfs_ioctl
                       sys_ioctl
                       system_call_fastpath
                       __GI_ioctl
            - 46.19% pollwake
                 __wake_up_common
               - __wake_up
                  - 77.30% __send_signal
                       send_signal
                     - do_send_sig_info
                        - 68.83% do_send_specific
                             do_tkill
                             sys_tgkill
                             system_call_fastpath
                             __pthread_kill
                        - 31.17% group_send_sig_info
                             kill_pid_info
                             sys_kill
                             system_call_fastpath
                             __kill
                  - 22.70% send_sigqueue
                       posix_timer_event
                       posix_timer_fn
                       __run_hrtimer
                       hrtimer_interrupt
                       smp_apic_timer_interrupt
                       apic_timer_interrupt
                       kvm_arch_commit_memory_region
                       __kvm_set_memory_region
                       kvm_set_memory_region
                       kvm_vm_ioctl_set_memory_region
                       kvm_vm_ioctl
                       vfs_ioctl
                       do_vfs_ioctl
                       sys_ioctl
                       system_call_fastpath
                       __GI_ioctl
         - 0.80% wake_up_state
            - wake_futex
               - 68.01% do_futex
                    sys_futex
                    system_call_fastpath
                    __pthread_cond_signal
               - 31.99% futex_wake
                    do_futex
                    sys_futex
                    system_call_fastpath
                    __lll_unlock_wake
   - 11.51% mmu_free_roots
      - 96.89% kvm_mmu_unload
           kvm_arch_vcpu_ioctl_run
           kvm_vcpu_ioctl
           vfs_ioctl
           do_vfs_ioctl
           sys_ioctl
           system_call_fastpath
           __GI_ioctl
      - 3.11% paging_new_cr3
           kvm_set_cr3
           handle_cr
           vmx_handle_exit
           kvm_arch_vcpu_ioctl_run
           kvm_vcpu_ioctl
           vfs_ioctl
           do_vfs_ioctl
           sys_ioctl
           system_call_fastpath
           __GI_ioctl
   - 6.78% paging64_page_fault
        kvm_mmu_page_fault
        handle_exception
        vmx_handle_exit
        kvm_arch_vcpu_ioctl_run
        kvm_vcpu_ioctl
        vfs_ioctl
        do_vfs_ioctl
        sys_ioctl
        system_call_fastpath
        __GI_ioctl
   - 5.72% make_all_cpus_request
      - 62.13% kvm_reload_remote_mmus
         - kvm_mmu_prepare_zap_page
            - 96.56% kvm_mmu_zap_all
                 kvm_arch_flush_shadow
                 __kvm_set_memory_region
                 kvm_set_memory_region
                 kvm_vm_ioctl_set_memory_region
                 kvm_vm_ioctl
                 vfs_ioctl
                 do_vfs_ioctl
                 sys_ioctl
                 system_call_fastpath
                 __GI_ioctl
            - 3.44% kvm_mmu_pte_write
                 emulator_write_phys
                 emulator_write_emulated_onepage
                 emulator_write_emulated
                 x86_emulate_insn
                 emulate_instruction
                 kvm_mmu_page_fault
                 handle_exception
                 vmx_handle_exit
                 kvm_arch_vcpu_ioctl_run
                 kvm_vcpu_ioctl
                 vfs_ioctl
                 do_vfs_ioctl
                 sys_ioctl
                 system_call_fastpath
                 __GI_ioctl
.
.

it's not all - is this enough? or can I simply export the whole tree somehow?
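(In case a plain-text dump answers the "export the whole tree" question: perf can write the fully expanded call graph to a file instead of the interactive browser. Only a sketch - the --stdio switch forces text output on perf versions that default to the TUI, older builds print text anyway, and the file name perf-callgraph.txt is made up here:)

  $ perf record -a -f -g                          # sample system-wide, stop with Ctrl-C
  $ perf report -g --stdio > perf-callgraph.txt   # dump the whole expanded call-graph tree as text
  $ less perf-callgraph.txt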
> will show who calls do_raw_spin_lock.
>
>>  235.00  4.9%  send_mono_rect  /usr/bin/qemu-kvm
>>  215.00  4.5%  rb_next         [kernel.kallsyms]
>>  166.00  3.5%  schedule        [kernel.kallsyms]
>
> What's the context switch rate?

'vmstat 1':

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free    buff   cache   si   so    bi    bo    in    cs us sy id wa st
 3  0      0 168920 5841008 4614064    0    0    28    21    32    28 15 22 63  0  0
 2  0      0 168820 5841008 4614088    0    0     0     0 13489 76739 15 16 69  0  0
 2  0      0 167656 5841008 4614088    0    0     0   104  6089 33390 16 13 71  0  0
 3  0      0 169184 5841008 4614088    0    0     0     0 12489 71263 17 15 69  0  0
 2  0      0 169200 5841012 4614092    0    0     0     8  7034 33908 17 12 72  0  0
 2  0      0 169432 5841024 4614080    0    0     0    16 10924 67008 16 12 72  0  0
 2  0      0 168084 5841028 4614088    0    0     0     4  8955 47767 17 13 71  0  0
 2  0      0 168936 5841032 4614088    0    0     0    80  9528 50119 16 13 71  0  0
.
.

(a one-liner for averaging the cs column is sketched at the end of this mail)

>>  141.00  3.0%  add_preempt_count  [kernel.kallsyms]
>>  137.00  2.9%  gen_rotc_rm_T1     /usr/bin/qemu-kvm
>
> Do you have a guest running with kvm disabled?!

nope, all seem to be using KVM (a quick way to double-check is sketched at the end of this mail)

--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava

tel.:   +420 596 603 142
fax:    +420 596 621 273
mobil:  +420 777 093 799

www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@xxxxxxxxxxx
-------------------------------------
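(Re the context-switch question: the rate is the cs column of the 'vmstat 1' output above, roughly 30-80k switches per second here. A minimal sketch for averaging it, assuming the default vmstat field order shown above, where cs is field 12; it skips the two header lines plus the first sample, which is the since-boot average, and the 10-sample run length is arbitrary:)

  $ vmstat 1 10 | awk 'NR > 3 { sum += $12; n++ } END { if (n) printf "%.0f cs/s average\n", sum / n }'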
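(Re "all seem to be using KVM": one way to double-check is to look at the file descriptors each qemu process holds - a guest actually running on KVM has /dev/kvm plus anon_inode kvm-vm/kvm-vcpu descriptors open, while a TCG-only guest has none. Only a sketch; the pgrep pattern is an assumption and may need adjusting to the local qemu binary name:)

  $ for pid in $(pgrep -f qemu-kvm); do echo "pid $pid: $(ls -l /proc/$pid/fd 2>/dev/null | grep -c kvm) kvm fds"; done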