Re: [PATCH v2] mm/shuffle.c: Fix races in add_to_free_area_random()

On Wed, Mar 18, 2020 at 12:17:14PM -0700, Alexander Duyck wrote:
> I was just putting it out there as a possibility. What I have seen in
> the past is that under some circumstances gcc can be smart enough to
> interpret that as a "branch on carry". My thought was you are likely
> having to test the value against itself and then you might be able to
> make use of shift and carry flag to avoid that. In addition you could
> get away from having to recast an unsigned value as a signed one in
> order to perform the bit test.

Ah, yes, it would be nice if gcc could use the carry bit for r
rather than having to devote a whole register to it.  But it has
to do two unrelated flag tests (zero and carry), and it's generally
pretty bad at taking advantage of preserved flag bits like that.
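
For reference, the C-level shape I have in mind is roughly the
following (sketched from memory rather than quoted from the patch;
the argument list and the READ_ONCE/WRITE_ONCE placement in
particular are my assumptions):

static u64 rand;	/* buffered random bits plus a low guard bit */

void add_to_free_area_random(struct page *page, struct free_area *area,
			     int migratetype)
{
	u64 r = READ_ONCE(rand);
	u64 rshift = r << 1;

	if (unlikely(rshift == 0)) {		/* only the guard bit was left */
		r = get_random_u64();
		rshift = (r << 1) | 1;		/* refill; new guard bit at the bottom */
	}
	WRITE_ONCE(rand, rshift);

	if ((s64)r < 0)		/* the bit that shlq shifts out into CF */
		add_to_free_area(page, area, migratetype);
	else
		add_to_free_area_tail(page, area, migratetype);
}

The (s64) cast is the sign test you mention; the same left shift both
yields the next bit and, by hitting zero, signals that a refill is due.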

My ideal x86-64 object code would be:
	shlq	rand(%rip)
	jz	fixup
fixed:
	jnc	tail
	jmp	add_to_free_area
tail:
	jmp	add_to_free_area_tail
fixup:
	pushq	%rdx
	pushq	%rsi
	pushq	%rdi
	call	get_random_u64
	popq	%rdi
	popq	%rsi
	popq	%rdx
	stc
	adcq	%rax,%rax
	movq	%rax, rand(%rip)
	jmp	fixed

... but I don't know how to induce GCC to generate that, and
the function doesn't seem worthy of platform-specific asm.

(Note that I have to use adc on the slow path because lea doesn't
set the carry bit.)
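
To spell that out: value-wise, "stc; adcq %rax,%rax" and
"leaq 1(%rax,%rax),%rax" both compute 2*rax + 1, but only the adc
leaves the old top bit of %rax in CF for the jnc at fixed: to use.
The C-level equivalent of that slow-path step, with made-up names:

	u64 fresh = get_random_u64();
	bool bit  = fresh >> 63;	/* what adc leaves in CF */
	u64 next  = (fresh << 1) | 1;	/* what adc (or lea) leaves in %rax */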



