computing-offload/generic_vdpa/qemu/accel-tcg-Optimize-jump-cache-flush-during-tlb-range.patch
From 28ca488c585c556ce04419f927d13d46771e1ea4 Mon Sep 17 00:00:00 2001
From: tangbinzy <tangbin_yewu@cmss.chinamobile.com>
Date: Tue, 18 Jul 2023 06:29:51 +0000
Subject: [PATCH] accel/tcg: Optimize jump cache flush during tlb range flush

mainline inclusion
commit cfc2a2d69d59f02b32df3098ce17e10ab86d43c6
category: bugfix

---------------------------------------------------------------

When the length of the range is large enough, clearing the whole cache is
faster than iterating over the (possibly extremely large) set of pages
contained in the range.

This mimics the pre-existing similar optimization done on the flush of the
tlb itself.
Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Message-Id: <20220110164754.1066025-1-idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: tangbinzy <tangbin_yewu@cmss.chinamobile.com>
---
 accel/tcg/cputlb.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index b69a953447..03526fa1ab 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -783,6 +783,15 @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
     }
     qemu_spin_unlock(&env_tlb(env)->c.lock);
 
+    /*
+     * If the length is larger than the jump cache size, then it will take
+     * longer to clear each entry individually than it will to clear it all.
+     */
+    if (d.len >= (TARGET_PAGE_SIZE * TB_JMP_CACHE_SIZE)) {
+        cpu_tb_jmp_cache_clear(cpu);
+        return;
+    }
+
     for (target_ulong i = 0; i < d.len; i += TARGET_PAGE_SIZE) {
         tb_flush_jmp_cache(cpu, d.addr + i);
     }
--
2.41.0.windows.1
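
For context outside the patch itself, here is a minimal standalone C sketch of
the heuristic the hunk adds. Every name in it is an illustrative stand-in, not
QEMU API: PAGE_SIZE and JMP_CACHE_ENTRIES mimic TARGET_PAGE_SIZE and
TB_JMP_CACHE_SIZE, and flush_one_page()/clear_whole_cache() mimic
tb_flush_jmp_cache()/cpu_tb_jmp_cache_clear(). The threshold is the point where
the per-page loop would issue at least as many flushes as the cache has
entries, so a single full clear can only be cheaper.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE         4096u  /* stand-in for TARGET_PAGE_SIZE */
#define JMP_CACHE_ENTRIES 4096u  /* stand-in for TB_JMP_CACHE_SIZE */

/* Pretend per-page flush: one cache entry invalidated per page. */
static void flush_one_page(uint64_t addr)
{
    printf("  flush jump cache entry for page 0x%" PRIx64 "\n", addr);
}

/* Pretend whole-cache flush: one pass over the fixed-size cache. */
static void clear_whole_cache(void)
{
    puts("  clear the entire jump cache in one pass");
}

/*
 * The heuristic from the patch: once the range covers at least as many
 * pages as the cache has entries, clear everything instead of iterating.
 */
static void flush_range(uint64_t addr, uint64_t len)
{
    if (len >= (uint64_t)PAGE_SIZE * JMP_CACHE_ENTRIES) {
        clear_whole_cache();
        return;
    }
    for (uint64_t i = 0; i < len; i += PAGE_SIZE) {
        flush_one_page(addr + i);
    }
}

int main(void)
{
    puts("small range (2 pages) takes the per-page path:");
    flush_range(0x10000, 2 * PAGE_SIZE);

    puts("huge range (1 GiB) takes the whole-cache path:");
    flush_range(0, 1ull << 30);
    return 0;
}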