Re: [kernel-hardening] [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection

Hi Kees,


On 2017/7/25 2:35, Kees Cook wrote:
+static __always_inline __must_check
+int __refcount_add_unless(refcount_t *r, int a, int u)
+{
+	int c, new;
+
+	c = atomic_read(&(r->refs));
+	do {
+		if (unlikely(c == u))
+			break;
+
+		asm volatile("addl %2,%0\n\t"
+			REFCOUNT_CHECK_LT_ZERO
+			: "=r" (new)
+			: "0" (c), "ir" (a),
+			  [counter] "m" (r->refs.counter)
+			: "cc", "cx");
Here, when the result is less than zero, REFCOUNT_CHECK_LT_ZERO saturates
r->refs.counter from the exception handler, which guarantees that the
following atomic_try_cmpxchg(&(r->refs), &c, new) fails on its first
attempt, since the counter in memory no longer matches the expected
value "c".

Maybe we could just saturate the local variable "new" instead?

+
+	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
+
+	return c;
+}
+

--
Best Regards
Li Kun
