On 30/5/24 21:42, Alex Bennée wrote:
Aside from the round-robin thread this is all common code. By moving
the halt_cond setup we also no longer need hacks to work around the
race between QOM object creation and thread creation.

It is a little ugly to free things up for the round-robin thread, but
better that it deal with its own special case than make the other
accelerators jump through hoops.
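
For anyone reading along: the hw/core/cpu-common.c hunk is not quoted
below, so the following is only a sketch of where the shared setup
presumably lands, namely in cpu_common_initfn(); the field names and
helpers are real QEMU ones, the exact hunk is assumed:

/* hw/core/cpu-common.c (sketch, not the actual hunk from this patch) */
static void cpu_common_initfn(Object *obj)
{
    CPUState *cpu = CPU(obj);

    /* ... existing field initialisation ... */

    /* Allocate the vCPU thread handle and halt condition at QOM object
     * creation time, so every accelerator's start_vcpu_thread hook finds
     * them already valid; this is what removes the old setup races. */
    cpu->thread = g_new0(QemuThread, 1);
    cpu->halt_cond = g_new0(QemuCond, 1);
    qemu_cond_init(cpu->halt_cond);
}
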
Signed-off-by: Alex Bennée <alex.bennee@xxxxxxxxxx>
---
 include/hw/core/cpu.h             |  4 ++++
 accel/dummy-cpus.c                |  3 ---
 accel/hvf/hvf-accel-ops.c         |  4 ----
 accel/kvm/kvm-accel-ops.c         |  3 ---
 accel/tcg/tcg-accel-ops-mttcg.c   |  4 ----
 accel/tcg/tcg-accel-ops-rr.c      | 14 +++++++-------
 hw/core/cpu-common.c              |  5 +++++
 target/i386/nvmm/nvmm-accel-ops.c |  3 ---
 target/i386/whpx/whpx-accel-ops.c |  3 ---
 9 files changed, 16 insertions(+), 27 deletions(-)
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index 894e73e52c..84c36c1450 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -317,22 +317,22 @@ void rr_start_vcpu_thread(CPUState *cpu)
     tcg_cpu_init_cflags(cpu, false);
     if (!single_tcg_cpu_thread) {
-        cpu->thread = g_new0(QemuThread, 1);
-        cpu->halt_cond = g_new0(QemuCond, 1);
-        qemu_cond_init(cpu->halt_cond);
+        single_tcg_halt_cond = cpu->halt_cond;
+        single_tcg_cpu_thread = cpu->thread;
         /* share a single thread for all cpus with TCG */
         snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
         qemu_thread_create(cpu->thread, thread_name,
                            rr_cpu_thread_fn,
                            cpu, QEMU_THREAD_JOINABLE);
-
-        single_tcg_halt_cond = cpu->halt_cond;
-        single_tcg_cpu_thread = cpu->thread;
     } else {
-        /* we share the thread */
+        /* we share the thread, dump spare data */

Maybe:

  /* we share the thread, release allocations from cpu_common_initfn() */

+        g_free(cpu->thread);
+        qemu_cond_destroy(cpu->halt_cond);
         cpu->thread = single_tcg_cpu_thread;
         cpu->halt_cond = single_tcg_halt_cond;
+
+        /* copy the stuff done at start of rr_cpu_thread_fn */
         cpu->thread_id = first_cpu->thread_id;
         cpu->neg.can_do_io = 1;
         cpu->created = true;
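
For reference, the three assignments copied here mirror what the top of
rr_cpu_thread_fn() sets up for the first vCPU. Abridged from memory, so
treat it as a sketch rather than the exact code; the helper names are
real, the ordering is approximate:

static void *rr_cpu_thread_fn(void *arg)
{
    CPUState *cpu = arg;

    rcu_register_thread();
    tcg_register_thread();

    bql_lock();
    qemu_thread_get_self(cpu->thread);

    /* the state the else branch copies for vCPUs that share the thread */
    cpu->thread_id = qemu_get_thread_id();
    cpu->neg.can_do_io = 1;
    cpu->created = true;
    qemu_cond_signal(&cpu->cond);

    /* ... main round-robin execution loop ... */
}
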
Reviewed-by: Philippe Mathieu-Daudé <philmd@xxxxxxxxxx>