Re: [PATCH v6 2/2] ipc/msg: mitigate the lock contention with percpu counter

On 9/20/22 04:36, Sun, Jiebin wrote:

On 9/18/2022 8:53 PM, Manfred Spraul wrote:
Hi Jiebin,

On 9/13/22 21:25, Jiebin Sun wrote:
The msg_bytes and msg_hdrs atomic counters are frequently
updated when the IPC msg queue is in heavy use, causing
heavy cache bounce and overhead. Changing them to
percpu_counter greatly improves the performance. Since
there is one percpu struct per namespace, the additional
memory cost is minimal. Reading of the count is done in
the msgctl call, which is infrequent, so the need to sum
up the per-CPU counts is also infrequent.

Apply the patch and test with pts/stress-ng-1.4.0
-- system v message passing (160 threads).

Score gain: 3.99x

CPU: ICX 8380 x 2 sockets
Core number: 40 x 2 physical cores
Benchmark: pts/stress-ng-1.4.0
-- system v message passing (160 threads)

Signed-off-by: Jiebin Sun <jiebin.sun@xxxxxxxxx>
Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
Reviewed-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxx>
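
[For context, a minimal sketch (not a quote from the patch) of the hot-path
side of the conversion in msgsnd()/msgrcv(), using the per-CPU helpers
introduced in patch 1/2 and the percpu_msg_* field names visible in the
hunk below:

	/* msgsnd() side: per-CPU update, no shared-cacheline bounce */
	percpu_counter_add_local(&ns->percpu_msg_bytes, msgsz);
	percpu_counter_add_local(&ns->percpu_msg_hdrs, 1);

	/* msgrcv() side: mirror the subtraction */
	percpu_counter_sub_local(&ns->percpu_msg_bytes, msgsz);
	percpu_counter_sub_local(&ns->percpu_msg_hdrs, 1);

The expensive fold across all CPUs, percpu_counter_sum(), is only paid in
the infrequent msgctl() path shown in the hunk below.]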
@@ -495,17 +496,18 @@ static int msgctl_info(struct ipc_namespace *ns, int msqid,
      msginfo->msgssz = MSGSSZ;
      msginfo->msgseg = MSGSEG;
      down_read(&msg_ids(ns).rwsem);
-    if (cmd == MSG_INFO) {
+    if (cmd == MSG_INFO)
          msginfo->msgpool = msg_ids(ns).in_use;
-        msginfo->msgmap = atomic_read(&ns->msg_hdrs);
-        msginfo->msgtql = atomic_read(&ns->msg_bytes);
+    max_idx = ipc_get_maxidx(&msg_ids(ns));
+    up_read(&msg_ids(ns).rwsem);
+    if (cmd == MSG_INFO) {
+        msginfo->msgmap = percpu_counter_sum(&ns->percpu_msg_hdrs);
+        msginfo->msgtql = percpu_counter_sum(&ns->percpu_msg_bytes);

Not caused by your change, it just now becomes obvious:

msginfo->msgmap and ->msgtql are of type int, i.e. signed 32-bit, while the actual counters are 64-bit. This can overflow, and I think the code should handle it: just clamp the values to INT_MAX.
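
[A minimal sketch of the clamping suggested here, not part of the posted
patch; counter and field names as in the hunk above:

	/*
	 * percpu_counter_sum() returns s64, but the msginfo fields are
	 * int. Saturate at INT_MAX instead of letting the narrowing
	 * cast wrap to a negative value.
	 */
	msginfo->msgmap = min_t(s64, INT_MAX,
				percpu_counter_sum(&ns->percpu_msg_hdrs));
	msginfo->msgtql = min_t(s64, INT_MAX,
				percpu_counter_sum(&ns->percpu_msg_bytes));
]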

Hi Manfred,

Thanks for your advice. But I'm not sure we can fully fix the overflow issue in ipc/msg by clamp(val, low, INT_MAX). If the value exceeds s32, we avoid the sign reversal, but still cannot report the accurate value.

I think just clamping it to INT_MAX is the best approach.
Reporting negative values is worse than clamping. If (and only if) there are real users that need to know the total amount of memory allocated for message queues in one namespace, then we could add a MSG_INFO64 with long values. But I would not add that right now; I do not see a real use case where the value would be needed.

Any other opinions?

--
    Manfred




