Re: [PATCH] powerpc/bpf: populate extable entries only during the last pass

Hi Naveen,

On 24/04/23 5:25 pm, Naveen N. Rao wrote:
Hari Bathini wrote:
Hello Christophe,

Thanks for the review.

On 07/04/23 11:31 am, Christophe Leroy wrote:


On 06/04/2023 at 09:35, Hari Bathini wrote:
Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes
to update addresses"), two additional passes are performed to avoid
space and CPU time wastage on powerpc. But these extra passes led to
WARN_ON_ONCE() hits in bpf_add_extable_entry(). Fix it by not adding
extable entries during the extra pass.

Are you sure this change is correct?

Actually, I was in two minds about that owing to commit 04c04205bc35
("bpf powerpc: Remove extra_pass from bpf_jit_build_body()").

Right, but Christophe's series adding complete passes during the extra_pass phase added the 'extra_pass' parameter back to bpf_jit_build_body().


During the extra pass the code can get shrunk or expanded (within the
limits of the size from the preliminary pass). Shouldn't extable entries
be populated during the last pass?

Unlikely, but the intention there was to avoid a regression in case
extra_pass always ends up being 'false' after some subsequent change.

But the current approach risks recording incorrect offsets in the extable. The main motivation for the extra pass is to generate more compact code, so there is a good chance that the offsets will change (especially with bpf subprogs).
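
As a rough illustration (mock structure below, not the kernel's actual
exception_table_entry):

	struct mock_extable_entry {
		unsigned long insn;	/* offset of the faulting load in the JIT image */
		unsigned long fixup;	/* offset of the recovery stub */
	};

	/* Initial pass: a preceding branch is emitted in its long form, so the
	 * BPF_PROBE_MEM load lands at, say, image + 0x48. */
	static struct mock_extable_entry entry = { .insn = 0x48, .fixup = 0x200 };

	/* Extra pass: addresses are now known, the branch shrinks, and the same
	 * load moves to image + 0x40.  If the entry is not re-populated in this
	 * pass, entry.insn still says 0x48 and no longer points at the faulting
	 * instruction. */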


- Hari


Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses")
Signed-off-by: Hari Bathini <hbathini@xxxxxxxxxxxxx>
---
   arch/powerpc/net/bpf_jit_comp32.c | 2 +-
   arch/powerpc/net/bpf_jit_comp64.c | 2 +-
   2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index 7f91ea064c08..e788b1fbeee6 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -977,7 +977,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
               if (size != BPF_DW && !fp->aux->verifier_zext)
                   EMIT(PPC_RAW_LI(dst_reg_h, 0));
-            if (BPF_MODE(code) == BPF_PROBE_MEM) {
+            if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {

It is probably better to pass 'extra_pass' into bpf_add_extable_entry() to keep all those checks together.
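
Something along these lines, as a rough sketch (the parameter list below is
abbreviated and partly hypothetical, not the helper's actual prototype):

	static int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image,
					 struct codegen_context *ctx,
					 int insn_idx, bool extra_pass)
	{
		/* Keep the gating in one place: nothing to record when there
		 * is no image yet or when we are in the extra pass. */
		if (!image || extra_pass)
			return 0;

		/* ... existing bounds check and entry population ... */
		return 0;
	}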


Thanks for the review, and also for the (offline) suggestion to reset the
index during the extra pass, which addresses my concern about a possible
regression. Posted v2.
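
Roughly, the idea is (sketch only; treat the field name below as my
shorthand for the extable write index in codegen_context):

	/* Before re-running bpf_jit_build_body() for the extra pass, rewind
	 * the extable write index so every entry is regenerated with its
	 * final offset, instead of overflowing the table and tripping the
	 * WARN_ON_ONCE() bounds check. */
	if (extra_pass)
		ctx->exentry_idx = 0;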

- Hari


