2017-04-07 14:24+0200, Radim Krčmář:
> 2017-04-07 12:55+0200, Christian Borntraeger:
>> On 04/06/2017 10:20 PM, Radim Krčmář wrote:
>>>  static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
>>>  {
>>> -	if (test_bit(req, &vcpu->requests)) {
>>> -		clear_bit(req, &vcpu->requests);
>>> +	if (kvm_test_request(req, vcpu)) {
>>> +		kvm_clear_request(req, vcpu);
>>
>> This looks fine. I am just asking myself why we do not use
>> test_and_clear_bit? Do we expect gcc to merge all test bits as
>> a fast path? This does not seem to work as far as I can tell and
>> almost everybody does a fast path like in
>
> test_and_clear_bit() is a slower operation even if the test is false (at
> least on x86), because it needs to be fully atomic.
>
>> arch/s390/kvm/kvm-s390.c:
>>         if (!vcpu->requests)
>>                 return 0;
>>
>> arch/x86/kvm/x86.c:
>>         if (vcpu->requests) {
>
> We'll mostly have only one request set, so splitting the test_and_clear
> improves the performance of many subsequent test_and_clear()s even if
> the compiler doesn't optimize.
>
> GCC couldn't even optimize if we used test_and_clear_bit(), because that
> instruction adds barriers, but the forward check for vcpu->requests is
> there because we do not trust the optimizer to do it for us and it would
> make a big difference.

Ugh, I started thinking that bitops were not atomic because I looked at
the wrong boot/bitops.h by mistake.  The compiler cannot merge
test_bit()s, but the speed difference holds.